00:00:00.000 Started by upstream project "autotest-per-patch" build number 132816
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.050 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:04.075 The recommended git tool is: git
00:00:04.076 using credential 00000000-0000-0000-0000-000000000002
00:00:04.078 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:04.091 Fetching changes from the remote Git repository
00:00:04.093 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:04.104 Using shallow fetch with depth 1
00:00:04.104 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:04.104 > git --version # timeout=10
00:00:04.118 > git --version # 'git version 2.39.2'
00:00:04.118 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:04.131 Setting http proxy: proxy-dmz.intel.com:911
00:00:04.131 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:10.019 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:10.033 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:10.046 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:10.046 > git config core.sparsecheckout # timeout=10
00:00:10.059 > git read-tree -mu HEAD # timeout=10
00:00:10.075 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:10.092 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:10.092 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:10.273 [Pipeline] Start of Pipeline
00:00:10.286 [Pipeline] library
00:00:10.287 Loading library shm_lib@master
00:00:10.287 Library shm_lib@master is cached. Copying from home.
00:00:10.302 [Pipeline] node
00:00:25.305 Still waiting to schedule task
00:00:25.305 Waiting for next available executor on ‘vagrant-vm-host’
00:14:09.267 Running on VM-host-SM38 in /var/jenkins/workspace/raid-vg-autotest
00:14:09.269 [Pipeline] {
00:14:09.280 [Pipeline] catchError
00:14:09.282 [Pipeline] {
00:14:09.296 [Pipeline] wrap
00:14:09.305 [Pipeline] {
00:14:09.313 [Pipeline] stage
00:14:09.315 [Pipeline] { (Prologue)
00:14:09.333 [Pipeline] echo
00:14:09.335 Node: VM-host-SM38
00:14:09.342 [Pipeline] cleanWs
00:14:09.350 [WS-CLEANUP] Deleting project workspace...
00:14:09.351 [WS-CLEANUP] Deferred wipeout is used...
00:14:09.356 [WS-CLEANUP] done
00:14:09.583 [Pipeline] setCustomBuildProperty
00:14:09.677 [Pipeline] httpRequest
00:14:10.064 [Pipeline] echo
00:14:10.066 Sorcerer 10.211.164.112 is alive
00:14:10.075 [Pipeline] retry
00:14:10.077 [Pipeline] {
00:14:10.090 [Pipeline] httpRequest
00:14:10.095 HttpMethod: GET
00:14:10.095 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:14:10.096 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:14:10.097 Response Code: HTTP/1.1 200 OK
00:14:10.098 Success: Status code 200 is in the accepted range: 200,404
00:14:10.099 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:14:10.249 [Pipeline] }
00:14:10.274 [Pipeline] // retry
00:14:10.282 [Pipeline] sh
00:14:10.576 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:14:10.591 [Pipeline] httpRequest
00:14:10.977 [Pipeline] echo
00:14:10.979 Sorcerer 10.211.164.112 is alive
00:14:10.989 [Pipeline] retry
00:14:10.991 [Pipeline] {
00:14:11.007 [Pipeline] httpRequest
00:14:11.012 HttpMethod: GET
00:14:11.012 URL: http://10.211.164.112/packages/spdk_43c35d804cc3f84a164f54a32eb57fc61a9856b2.tar.gz
00:14:11.013 Sending request to url: http://10.211.164.112/packages/spdk_43c35d804cc3f84a164f54a32eb57fc61a9856b2.tar.gz
00:14:11.014 Response Code: HTTP/1.1 200 OK
00:14:11.015 Success: Status code 200 is in the accepted range: 200,404
00:14:11.015 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_43c35d804cc3f84a164f54a32eb57fc61a9856b2.tar.gz
00:14:13.292 [Pipeline] }
00:14:13.311 [Pipeline] // retry
00:14:13.318 [Pipeline] sh
00:14:13.631 + tar --no-same-owner -xf spdk_43c35d804cc3f84a164f54a32eb57fc61a9856b2.tar.gz
00:14:16.184 [Pipeline] sh
00:14:16.466 + git -C spdk log --oneline -n5
00:14:16.466 43c35d804 util: multi-level fd_group nesting
00:14:16.466 6336b7c5c util: keep track of nested child fd_groups
00:14:16.466 2e1d23f4b fuse_dispatcher: make header internal
00:14:16.466 3318278a6 vhost: check if vsession exists before remove scsi vdev
00:14:16.466 a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails
00:14:16.483 [Pipeline] writeFile
00:14:16.498 [Pipeline] sh
00:14:16.778 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:14:16.792 [Pipeline] sh
00:14:17.072 + cat autorun-spdk.conf
00:14:17.072 SPDK_RUN_FUNCTIONAL_TEST=1
00:14:17.072 SPDK_RUN_ASAN=1
00:14:17.072 SPDK_RUN_UBSAN=1
00:14:17.072 SPDK_TEST_RAID=1
00:14:17.072 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:14:17.078 RUN_NIGHTLY=0
00:14:17.081 [Pipeline] }
00:14:17.094 [Pipeline] // stage
00:14:17.109 [Pipeline] stage
00:14:17.112 [Pipeline] { (Run VM)
00:14:17.125 [Pipeline] sh
00:14:17.404 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:14:17.404 + echo 'Start stage prepare_nvme.sh'
00:14:17.404 Start stage prepare_nvme.sh
00:14:17.404 + [[ -n 4 ]]
00:14:17.404 + disk_prefix=ex4
00:14:17.404 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:14:17.404 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:14:17.404 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:14:17.404 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:14:17.404 ++ SPDK_RUN_ASAN=1
00:14:17.404 ++ SPDK_RUN_UBSAN=1
00:14:17.404 ++ SPDK_TEST_RAID=1
00:14:17.404 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:14:17.404 ++ RUN_NIGHTLY=0
00:14:17.404 + cd /var/jenkins/workspace/raid-vg-autotest
00:14:17.404 + nvme_files=()
00:14:17.404 + declare -A nvme_files
00:14:17.404 + backend_dir=/var/lib/libvirt/images/backends
00:14:17.404 + nvme_files['nvme.img']=5G
00:14:17.404 + nvme_files['nvme-cmb.img']=5G
00:14:17.404 + nvme_files['nvme-multi0.img']=4G
00:14:17.404 + nvme_files['nvme-multi1.img']=4G
00:14:17.404 + nvme_files['nvme-multi2.img']=4G
00:14:17.404 + nvme_files['nvme-openstack.img']=8G
00:14:17.404 + nvme_files['nvme-zns.img']=5G
00:14:17.404 + (( SPDK_TEST_NVME_PMR == 1 ))
00:14:17.404 + (( SPDK_TEST_FTL == 1 ))
00:14:17.404 + (( SPDK_TEST_NVME_FDP == 1 ))
00:14:17.404 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:14:17.404 + for nvme in "${!nvme_files[@]}"
00:14:17.404 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G
00:14:17.404 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:14:17.404 + for nvme in "${!nvme_files[@]}"
00:14:17.404 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G
00:14:17.404 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:14:17.404 + for nvme in "${!nvme_files[@]}"
00:14:17.404 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G
00:14:17.404 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:14:17.404 + for nvme in "${!nvme_files[@]}"
00:14:17.404 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G
00:14:17.404 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:14:17.404 + for nvme in "${!nvme_files[@]}"
00:14:17.404 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G
00:14:17.404 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:14:17.404 + for nvme in "${!nvme_files[@]}"
00:14:17.404 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G
00:14:17.404 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:14:17.404 + for nvme in "${!nvme_files[@]}"
00:14:17.404 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G
00:14:17.662 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:14:17.662 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu
00:14:17.662 + echo 'End stage prepare_nvme.sh'
00:14:17.662 End stage prepare_nvme.sh
00:14:17.673 [Pipeline] sh
00:14:17.951 + DISTRO=fedora39
00:14:17.951 + CPUS=10
00:14:17.951 + RAM=12288
00:14:17.951 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:14:17.951 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora39
00:14:17.951 
00:14:17.952 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:14:17.952 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:14:17.952 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:14:17.952 HELP=0
00:14:17.952 DRY_RUN=0
00:14:17.952 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,
00:14:17.952 NVME_DISKS_TYPE=nvme,nvme,
00:14:17.952 NVME_AUTO_CREATE=0
00:14:17.952 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,
00:14:17.952 NVME_CMB=,,
00:14:17.952 NVME_PMR=,,
00:14:17.952 NVME_ZNS=,,
00:14:17.952 NVME_MS=,,
00:14:17.952 NVME_FDP=,,
00:14:17.952 SPDK_VAGRANT_DISTRO=fedora39
00:14:17.952 SPDK_VAGRANT_VMCPU=10
00:14:17.952 SPDK_VAGRANT_VMRAM=12288
00:14:17.952 SPDK_VAGRANT_PROVIDER=libvirt
00:14:17.952 SPDK_VAGRANT_HTTP_PROXY=
00:14:17.952 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:14:17.952 SPDK_OPENSTACK_NETWORK=0
00:14:17.952 VAGRANT_PACKAGE_BOX=0
00:14:17.952 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:14:17.952 FORCE_DISTRO=true
00:14:17.952 VAGRANT_BOX_VERSION=
00:14:17.952 EXTRA_VAGRANTFILES=
00:14:17.952 NIC_MODEL=e1000
00:14:17.952 
00:14:17.952 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:14:17.952 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:14:20.478 Bringing machine 'default' up with 'libvirt' provider...
00:14:20.735 ==> default: Creating image (snapshot of base box volume).
00:14:20.735 ==> default: Creating domain with the following settings...
00:14:20.735 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733784955_011e40190e0d64abe8a5
00:14:20.735 ==> default: -- Domain type: kvm
00:14:20.735 ==> default: -- Cpus: 10
00:14:20.735 ==> default: -- Feature: acpi
00:14:20.735 ==> default: -- Feature: apic
00:14:20.735 ==> default: -- Feature: pae
00:14:20.735 ==> default: -- Memory: 12288M
00:14:20.735 ==> default: -- Memory Backing: hugepages:
00:14:20.735 ==> default: -- Management MAC:
00:14:20.735 ==> default: -- Loader:
00:14:20.735 ==> default: -- Nvram:
00:14:20.735 ==> default: -- Base box: spdk/fedora39
00:14:20.735 ==> default: -- Storage pool: default
00:14:20.735 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733784955_011e40190e0d64abe8a5.img (20G)
00:14:20.735 ==> default: -- Volume Cache: default
00:14:20.735 ==> default: -- Kernel:
00:14:20.735 ==> default: -- Initrd:
00:14:20.735 ==> default: -- Graphics Type: vnc
00:14:20.735 ==> default: -- Graphics Port: -1
00:14:20.735 ==> default: -- Graphics IP: 127.0.0.1
00:14:20.735 ==> default: -- Graphics Password: Not defined
00:14:20.735 ==> default: -- Video Type: cirrus
00:14:20.735 ==> default: -- Video VRAM: 9216
00:14:20.735 ==> default: -- Sound Type:
00:14:20.735 ==> default: -- Keymap: en-us
00:14:20.735 ==> default: -- TPM Path:
00:14:20.735 ==> default: -- INPUT: type=mouse, bus=ps2
00:14:20.735 ==> default: -- Command line args:
00:14:20.735 ==> default: -> value=-device,
00:14:20.735 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:14:20.735 ==> default: -> value=-drive,
00:14:20.735 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0,
00:14:20.735 ==> default: -> value=-device,
00:14:20.735 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:14:20.735 ==> default: -> value=-device,
00:14:20.735 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:14:20.735 ==> default: -> value=-drive,
00:14:20.735 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:14:20.735 ==> default: -> value=-device,
00:14:20.735 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:14:20.735 ==> default: -> value=-drive,
00:14:20.735 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:14:20.735 ==> default: -> value=-device,
00:14:20.735 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:14:20.735 ==> default: -> value=-drive,
00:14:20.735 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:14:20.735 ==> default: -> value=-device,
00:14:20.735 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:14:20.735 ==> default: Creating shared folders metadata...
00:14:20.735 ==> default: Starting domain.
00:14:22.108 ==> default: Waiting for domain to get an IP address...
00:14:36.991 ==> default: Waiting for SSH to become available...
00:14:36.991 ==> default: Configuring and enabling network interfaces...
00:14:39.531 default: SSH address: 192.168.121.146:22
00:14:39.531 default: SSH username: vagrant
00:14:39.531 default: SSH auth method: private key
00:14:41.444 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:14:48.030 ==> default: Mounting SSHFS shared folder...
00:14:49.415 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:14:49.415 ==> default: Checking Mount..
00:14:50.359 ==> default: Folder Successfully Mounted!
00:14:50.359 
00:14:50.359 SUCCESS!
00:14:50.359 
00:14:50.359 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:14:50.359 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:14:50.359 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:14:50.359 
00:14:50.369 [Pipeline] }
00:14:50.383 [Pipeline] // stage
00:14:50.395 [Pipeline] dir
00:14:50.396 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:14:50.397 [Pipeline] {
00:14:50.410 [Pipeline] catchError
00:14:50.412 [Pipeline] {
00:14:50.423 [Pipeline] sh
00:14:50.753 + vagrant ssh-config --host vagrant
00:14:50.753 + sed -ne '/^Host/,$p'
00:14:50.753 + tee ssh_conf
00:14:53.298 Host vagrant
00:14:53.298 HostName 192.168.121.146
00:14:53.298 User vagrant
00:14:53.298 Port 22
00:14:53.298 UserKnownHostsFile /dev/null
00:14:53.298 StrictHostKeyChecking no
00:14:53.298 PasswordAuthentication no
00:14:53.298 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:14:53.298 IdentitiesOnly yes
00:14:53.298 LogLevel FATAL
00:14:53.298 ForwardAgent yes
00:14:53.298 ForwardX11 yes
00:14:53.298 
00:14:53.313 [Pipeline] withEnv
00:14:53.316 [Pipeline] {
00:14:53.329 [Pipeline] sh
00:14:53.666 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:14:53.666 source /etc/os-release
00:14:53.666 [[ -e /image.version ]] && img=$(< /image.version)
00:14:53.666 # Minimal, systemd-like check.
00:14:53.666 if [[ -e /.dockerenv ]]; then
00:14:53.666 # Clear garbage from the node'\''s name:
00:14:53.666 # agt-er_autotest_547-896 -> autotest_547-896
00:14:53.666 # $HOSTNAME is the actual container id
00:14:53.666 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:14:53.666 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:14:53.666 # We can assume this is a mount from a host where container is running,
00:14:53.666 # so fetch its hostname to easily identify the target swarm worker.
00:14:53.666 container="$(< /etc/hostname) ($agent)"
00:14:53.666 else
00:14:53.666 # Fallback
00:14:53.666 container=$agent
00:14:53.666 fi
00:14:53.666 fi
00:14:53.666 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:14:53.666 '
00:14:53.679 [Pipeline] }
00:14:53.693 [Pipeline] // withEnv
00:14:53.702 [Pipeline] setCustomBuildProperty
00:14:53.717 [Pipeline] stage
00:14:53.720 [Pipeline] { (Tests)
00:14:53.736 [Pipeline] sh
00:14:54.020 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:14:54.035 [Pipeline] sh
00:14:54.318 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:14:54.334 [Pipeline] timeout
00:14:54.334 Timeout set to expire in 1 hr 30 min
00:14:54.336 [Pipeline] {
00:14:54.348 [Pipeline] sh
00:14:54.632 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:14:54.892 HEAD is now at 43c35d804 util: multi-level fd_group nesting
00:14:54.905 [Pipeline] sh
00:14:55.189 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:14:55.465 [Pipeline] sh
00:14:55.805 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:14:55.822 [Pipeline] sh
00:14:56.108 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo'
00:14:56.108 ++ readlink -f spdk_repo
00:14:56.108 + DIR_ROOT=/home/vagrant/spdk_repo
00:14:56.108 + [[ -n /home/vagrant/spdk_repo ]]
00:14:56.108 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:14:56.108 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:14:56.108 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:14:56.108 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:14:56.108 + [[ -d /home/vagrant/spdk_repo/output ]]
00:14:56.108 + [[ raid-vg-autotest == pkgdep-* ]]
00:14:56.108 + cd /home/vagrant/spdk_repo
00:14:56.108 + source /etc/os-release
00:14:56.108 ++ NAME='Fedora Linux'
00:14:56.108 ++ VERSION='39 (Cloud Edition)'
00:14:56.108 ++ ID=fedora
00:14:56.108 ++ VERSION_ID=39
00:14:56.108 ++ VERSION_CODENAME=
00:14:56.108 ++ PLATFORM_ID=platform:f39
00:14:56.108 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:14:56.108 ++ ANSI_COLOR='0;38;2;60;110;180'
00:14:56.108 ++ LOGO=fedora-logo-icon
00:14:56.108 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:14:56.108 ++ HOME_URL=https://fedoraproject.org/
00:14:56.108 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:14:56.108 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:14:56.108 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:14:56.108 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:14:56.108 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:14:56.108 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:14:56.108 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:14:56.108 ++ SUPPORT_END=2024-11-12
00:14:56.108 ++ VARIANT='Cloud Edition'
00:14:56.108 ++ VARIANT_ID=cloud
00:14:56.108 + uname -a
00:14:56.108 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:14:56.108 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:14:56.679 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:14:56.679 Hugepages
00:14:56.679 node hugesize free / total
00:14:56.679 node0 1048576kB 0 / 0
00:14:56.679 node0 2048kB 0 / 0
00:14:56.679 
00:14:56.679 Type BDF Vendor Device NUMA Driver Device Block devices
00:14:56.679 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:14:56.679 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:14:56.679 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:14:56.679 + rm -f /tmp/spdk-ld-path
00:14:56.679 + source autorun-spdk.conf
00:14:56.679 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:14:56.679 ++ SPDK_RUN_ASAN=1
00:14:56.679 ++ SPDK_RUN_UBSAN=1
00:14:56.679 ++ SPDK_TEST_RAID=1
00:14:56.679 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:14:56.679 ++ RUN_NIGHTLY=0
00:14:56.679 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:14:56.679 + [[ -n '' ]]
00:14:56.679 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:14:56.679 + for M in /var/spdk/build-*-manifest.txt
00:14:56.679 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:14:56.679 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:14:56.679 + for M in /var/spdk/build-*-manifest.txt
00:14:56.679 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:14:56.679 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:14:56.679 + for M in /var/spdk/build-*-manifest.txt
00:14:56.679 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:14:56.679 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:14:56.679 ++ uname
00:14:56.679 + [[ Linux == \L\i\n\u\x ]]
00:14:56.679 + sudo dmesg -T
00:14:56.679 + sudo dmesg --clear
00:14:56.679 + dmesg_pid=4988
00:14:56.679 + [[ Fedora Linux == FreeBSD ]]
00:14:56.679 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:14:56.679 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:14:56.679 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:14:56.679 + [[ -x /usr/src/fio-static/fio ]]
00:14:56.679 + sudo dmesg -Tw
00:14:56.679 + export FIO_BIN=/usr/src/fio-static/fio
00:14:56.679 + FIO_BIN=/usr/src/fio-static/fio
00:14:56.679 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:14:56.679 + [[ ! -v VFIO_QEMU_BIN ]]
00:14:56.679 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:14:56.679 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:14:56.679 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:14:56.679 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:14:56.679 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:14:56.679 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:14:56.679 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:14:56.679 22:56:32 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:14:56.679 22:56:32 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:14:56.679 22:56:32 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:14:56.679 22:56:32 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:14:56.679 22:56:32 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:14:56.679 22:56:32 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:14:56.679 22:56:32 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:14:56.679 22:56:32 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:14:56.679 22:56:32 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:14:56.679 22:56:32 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:14:56.940 22:56:32 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:14:56.940 22:56:32 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:14:56.940 22:56:32 -- scripts/common.sh@15 -- $ shopt -s extglob
00:14:56.940 22:56:32 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:14:56.940 22:56:32 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:56.940 22:56:32 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:56.940 22:56:32 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:56.940 22:56:32 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:56.940 22:56:32 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:56.940 22:56:32 -- paths/export.sh@5 -- $ export PATH
00:14:56.940 22:56:32 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:56.940 22:56:32 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:14:56.940 22:56:32 -- common/autobuild_common.sh@493 -- $ date +%s
00:14:56.940 22:56:32 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733784992.XXXXXX
00:14:56.940 22:56:32 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733784992.tvC55s
00:14:56.940 22:56:32 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:14:56.940 22:56:32 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:14:56.940 22:56:32 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:14:56.940 22:56:32 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:14:56.940 22:56:32 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:14:56.940 22:56:32 -- common/autobuild_common.sh@509 -- $ get_config_params
00:14:56.940 22:56:32 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:14:56.940 22:56:32 -- common/autotest_common.sh@10 -- $ set +x
00:14:56.940 22:56:32 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:14:56.940 22:56:32 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:14:56.940 22:56:32 -- pm/common@17 -- $ local monitor
00:14:56.940 22:56:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:14:56.940 22:56:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:14:56.940 22:56:32 -- pm/common@25 -- $ sleep 1
00:14:56.940 22:56:32 -- pm/common@21 -- $ date +%s
00:14:56.940 22:56:32 -- pm/common@21 -- $ date +%s
00:14:56.940 22:56:32 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733784992
00:14:56.940 22:56:32 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733784992
00:14:56.940 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733784992_collect-cpu-load.pm.log
00:14:56.940 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733784992_collect-vmstat.pm.log
00:14:57.882 22:56:33 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:14:57.882 22:56:33 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:14:57.882 22:56:33 -- spdk/autobuild.sh@12 -- $ umask 022
00:14:57.882 22:56:33 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:14:57.882 22:56:33 -- spdk/autobuild.sh@16 -- $ date -u
00:14:57.882 Mon Dec 9 10:56:33 PM UTC 2024
00:14:57.882 22:56:33 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:14:57.882 v25.01-pre-315-g43c35d804
00:14:57.882 22:56:33 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:14:57.882 22:56:33 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:14:57.882 22:56:33 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:14:57.882 22:56:33 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:14:57.882 22:56:33 -- common/autotest_common.sh@10 -- $ set +x
00:14:57.882 ************************************
00:14:57.882 START TEST asan
00:14:57.882 ************************************
00:14:57.882 using asan
00:14:57.882 22:56:33 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:14:57.882 
00:14:57.882 real 0m0.000s
00:14:57.882 user 0m0.000s
00:14:57.882 sys 0m0.000s
00:14:57.882 22:56:33 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:14:57.882 22:56:33 asan -- common/autotest_common.sh@10 -- $ set +x
00:14:57.882 ************************************
00:14:57.882 END TEST asan
00:14:57.882 ************************************
00:14:57.882 22:56:33 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:14:57.882 22:56:33 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:14:57.882 22:56:33 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:14:57.882 22:56:33 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:14:57.882 22:56:33 -- common/autotest_common.sh@10 -- $ set +x
00:14:57.882 ************************************
00:14:57.882 START TEST ubsan
00:14:57.882 ************************************
00:14:57.882 using ubsan
00:14:57.882 22:56:33 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:14:57.882 
00:14:57.882 real 0m0.000s
00:14:57.882 user 0m0.000s
00:14:57.882 sys 0m0.000s
00:14:57.882 22:56:33 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:14:57.882 22:56:33 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:14:57.882 ************************************
00:14:57.882 END TEST ubsan
00:14:57.882 ************************************
00:14:57.882 22:56:33 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:14:57.882 22:56:33 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:14:57.882 22:56:33 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:14:57.882 22:56:33 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:14:57.882 22:56:33 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:14:57.882 22:56:33 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:14:57.882 22:56:33 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:14:57.882 22:56:33 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:14:57.882 22:56:33 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:14:58.143 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:14:58.143 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:14:58.403 Using 'verbs' RDMA provider
00:15:08.974 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:15:19.028 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:15:19.028 Creating mk/config.mk...done.
00:15:19.028 Creating mk/cc.flags.mk...done.
00:15:19.028 Type 'make' to build.
00:15:19.028 22:56:54 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:15:19.028 22:56:54 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:15:19.028 22:56:54 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:15:19.028 22:56:54 -- common/autotest_common.sh@10 -- $ set +x
00:15:19.028 ************************************
00:15:19.028 START TEST make
00:15:19.028 ************************************
00:15:19.028 22:56:54 make -- common/autotest_common.sh@1129 -- $ make -j10
00:15:19.288 make[1]: Nothing to be done for 'all'.
00:15:29.337 The Meson build system
00:15:29.337 Version: 1.5.0
00:15:29.337 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:15:29.337 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:15:29.337 Build type: native build
00:15:29.337 Program cat found: YES (/usr/bin/cat)
00:15:29.337 Project name: DPDK
00:15:29.337 Project version: 24.03.0
00:15:29.337 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:15:29.337 C linker for the host machine: cc ld.bfd 2.40-14
00:15:29.337 Host machine cpu family: x86_64
00:15:29.337 Host machine cpu: x86_64
00:15:29.337 Message: ## Building in Developer Mode ##
00:15:29.337 Program pkg-config found: YES (/usr/bin/pkg-config)
00:15:29.337 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:15:29.337 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:15:29.337 Program python3 found: YES (/usr/bin/python3)
00:15:29.337 Program cat found: YES (/usr/bin/cat)
00:15:29.337 Compiler for C supports arguments -march=native: YES
00:15:29.337 Checking for size of "void *" : 8
00:15:29.337 Checking for size of "void *" : 8 (cached)
00:15:29.337 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:15:29.337 Library m found: YES
00:15:29.337 Library numa found: YES
00:15:29.337 Has header "numaif.h" : YES
00:15:29.337 Library fdt found: NO
00:15:29.337 Library execinfo found: NO
00:15:29.337 Has header "execinfo.h" : YES
00:15:29.337 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:15:29.337 Run-time dependency libarchive found: NO (tried pkgconfig)
00:15:29.337 Run-time dependency libbsd found: NO (tried pkgconfig)
00:15:29.337 Run-time dependency jansson found: NO (tried pkgconfig)
00:15:29.337 Run-time dependency openssl found: YES 3.1.1
00:15:29.337 Run-time dependency libpcap found: YES 1.10.4
00:15:29.337 Has header "pcap.h" with dependency libpcap: YES
00:15:29.337 Compiler for C supports arguments -Wcast-qual: YES
00:15:29.337 Compiler for C supports arguments -Wdeprecated: YES
00:15:29.337 Compiler for C supports arguments -Wformat: YES
00:15:29.337 Compiler for C supports arguments -Wformat-nonliteral: NO
00:15:29.337 Compiler for C supports arguments -Wformat-security: NO
00:15:29.337 Compiler for C supports arguments -Wmissing-declarations: YES
00:15:29.337 Compiler for C supports arguments -Wmissing-prototypes: YES
00:15:29.337 Compiler for C supports arguments -Wnested-externs: YES
00:15:29.337 Compiler for C supports arguments -Wold-style-definition: YES
00:15:29.337 Compiler for C supports arguments -Wpointer-arith: YES
00:15:29.337 Compiler for C supports arguments -Wsign-compare: YES
00:15:29.337 Compiler for C supports arguments -Wstrict-prototypes: YES
00:15:29.337 Compiler for C supports arguments -Wundef: YES
00:15:29.337 Compiler for C supports arguments -Wwrite-strings: YES
00:15:29.337 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:15:29.337 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:15:29.337 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:15:29.337 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:15:29.337 Program objdump found: YES (/usr/bin/objdump)
00:15:29.337 Compiler for C supports arguments -mavx512f: YES
00:15:29.337 Checking if "AVX512 checking" compiles: YES
00:15:29.337 Fetching value of define "__SSE4_2__" : 1
00:15:29.337 Fetching value of define "__AES__" : 1
00:15:29.337 Fetching value of define "__AVX__" : 1
00:15:29.337 Fetching value of define "__AVX2__" : 1
00:15:29.337 Fetching value of define "__AVX512BW__" : 1
00:15:29.337 Fetching value of define "__AVX512CD__" : 1
00:15:29.337 Fetching value of define "__AVX512DQ__" : 1
00:15:29.337 Fetching value of define "__AVX512F__" : 1
00:15:29.337 Fetching value of define "__AVX512VL__" : 1
00:15:29.337 Fetching value of define
"__PCLMUL__" : 1
00:15:29.337 Fetching value of define "__RDRND__" : 1
00:15:29.337 Fetching value of define "__RDSEED__" : 1
00:15:29.337 Fetching value of define "__VPCLMULQDQ__" : 1
00:15:29.337 Fetching value of define "__znver1__" : (undefined)
00:15:29.337 Fetching value of define "__znver2__" : (undefined)
00:15:29.337 Fetching value of define "__znver3__" : (undefined)
00:15:29.337 Fetching value of define "__znver4__" : (undefined)
00:15:29.337 Library asan found: YES
00:15:29.337 Compiler for C supports arguments -Wno-format-truncation: YES
00:15:29.337 Message: lib/log: Defining dependency "log"
00:15:29.337 Message: lib/kvargs: Defining dependency "kvargs"
00:15:29.337 Message: lib/telemetry: Defining dependency "telemetry"
00:15:29.337 Library rt found: YES
00:15:29.337 Checking for function "getentropy" : NO
00:15:29.337 Message: lib/eal: Defining dependency "eal"
00:15:29.337 Message: lib/ring: Defining dependency "ring"
00:15:29.337 Message: lib/rcu: Defining dependency "rcu"
00:15:29.337 Message: lib/mempool: Defining dependency "mempool"
00:15:29.337 Message: lib/mbuf: Defining dependency "mbuf"
00:15:29.337 Fetching value of define "__PCLMUL__" : 1 (cached)
00:15:29.337 Fetching value of define "__AVX512F__" : 1 (cached)
00:15:29.337 Fetching value of define "__AVX512BW__" : 1 (cached)
00:15:29.337 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:15:29.337 Fetching value of define "__AVX512VL__" : 1 (cached)
00:15:29.337 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:15:29.337 Compiler for C supports arguments -mpclmul: YES
00:15:29.337 Compiler for C supports arguments -maes: YES
00:15:29.337 Compiler for C supports arguments -mavx512f: YES (cached)
00:15:29.337 Compiler for C supports arguments -mavx512bw: YES
00:15:29.337 Compiler for C supports arguments -mavx512dq: YES
00:15:29.337 Compiler for C supports arguments -mavx512vl: YES
00:15:29.337 Compiler for C supports arguments -mvpclmulqdq: YES
00:15:29.337 Compiler for C supports arguments -mavx2: YES
00:15:29.337 Compiler for C supports arguments -mavx: YES
00:15:29.337 Message: lib/net: Defining dependency "net"
00:15:29.337 Message: lib/meter: Defining dependency "meter"
00:15:29.337 Message: lib/ethdev: Defining dependency "ethdev"
00:15:29.337 Message: lib/pci: Defining dependency "pci"
00:15:29.337 Message: lib/cmdline: Defining dependency "cmdline"
00:15:29.337 Message: lib/hash: Defining dependency "hash"
00:15:29.337 Message: lib/timer: Defining dependency "timer"
00:15:29.337 Message: lib/compressdev: Defining dependency "compressdev"
00:15:29.337 Message: lib/cryptodev: Defining dependency "cryptodev"
00:15:29.337 Message: lib/dmadev: Defining dependency "dmadev"
00:15:29.337 Compiler for C supports arguments -Wno-cast-qual: YES
00:15:29.337 Message: lib/power: Defining dependency "power"
00:15:29.337 Message: lib/reorder: Defining dependency "reorder"
00:15:29.337 Message: lib/security: Defining dependency "security"
00:15:29.337 Has header "linux/userfaultfd.h" : YES
00:15:29.337 Has header "linux/vduse.h" : YES
00:15:29.337 Message: lib/vhost: Defining dependency "vhost"
00:15:29.337 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:15:29.337 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:15:29.337 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:15:29.337 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:15:29.337 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:15:29.337 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:15:29.337 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:15:29.337 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:15:29.337 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:15:29.338 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:15:29.338 Program doxygen found: YES
(/usr/local/bin/doxygen)
00:15:29.338 Configuring doxy-api-html.conf using configuration
00:15:29.338 Configuring doxy-api-man.conf using configuration
00:15:29.338 Program mandb found: YES (/usr/bin/mandb)
00:15:29.338 Program sphinx-build found: NO
00:15:29.338 Configuring rte_build_config.h using configuration
00:15:29.338 Message:
00:15:29.338 =================
00:15:29.338 Applications Enabled
00:15:29.338 =================
00:15:29.338
00:15:29.338 apps:
00:15:29.338
00:15:29.338
00:15:29.338 Message:
00:15:29.338 =================
00:15:29.338 Libraries Enabled
00:15:29.338 =================
00:15:29.338
00:15:29.338 libs:
00:15:29.338 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:15:29.338 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:15:29.338 cryptodev, dmadev, power, reorder, security, vhost,
00:15:29.338
00:15:29.338 Message:
00:15:29.338 ===============
00:15:29.338 Drivers Enabled
00:15:29.338 ===============
00:15:29.338
00:15:29.338 common:
00:15:29.338
00:15:29.338 bus:
00:15:29.338 pci, vdev,
00:15:29.338 mempool:
00:15:29.338 ring,
00:15:29.338 dma:
00:15:29.338
00:15:29.338 net:
00:15:29.338
00:15:29.338 crypto:
00:15:29.338
00:15:29.338 compress:
00:15:29.338
00:15:29.338 vdpa:
00:15:29.338
00:15:29.338
00:15:29.338 Message:
00:15:29.338 =================
00:15:29.338 Content Skipped
00:15:29.338 =================
00:15:29.338
00:15:29.338 apps:
00:15:29.338 dumpcap: explicitly disabled via build config
00:15:29.338 graph: explicitly disabled via build config
00:15:29.338 pdump: explicitly disabled via build config
00:15:29.338 proc-info: explicitly disabled via build config
00:15:29.338 test-acl: explicitly disabled via build config
00:15:29.338 test-bbdev: explicitly disabled via build config
00:15:29.338 test-cmdline: explicitly disabled via build config
00:15:29.338 test-compress-perf: explicitly disabled via build config
00:15:29.338 test-crypto-perf: explicitly disabled via build config
00:15:29.338 test-dma-perf: explicitly disabled via build config
00:15:29.338 test-eventdev: explicitly disabled via build config
00:15:29.338 test-fib: explicitly disabled via build config
00:15:29.338 test-flow-perf: explicitly disabled via build config
00:15:29.338 test-gpudev: explicitly disabled via build config
00:15:29.338 test-mldev: explicitly disabled via build config
00:15:29.338 test-pipeline: explicitly disabled via build config
00:15:29.338 test-pmd: explicitly disabled via build config
00:15:29.338 test-regex: explicitly disabled via build config
00:15:29.338 test-sad: explicitly disabled via build config
00:15:29.338 test-security-perf: explicitly disabled via build config
00:15:29.338
00:15:29.338 libs:
00:15:29.338 argparse: explicitly disabled via build config
00:15:29.338 metrics: explicitly disabled via build config
00:15:29.338 acl: explicitly disabled via build config
00:15:29.338 bbdev: explicitly disabled via build config
00:15:29.338 bitratestats: explicitly disabled via build config
00:15:29.338 bpf: explicitly disabled via build config
00:15:29.338 cfgfile: explicitly disabled via build config
00:15:29.338 distributor: explicitly disabled via build config
00:15:29.338 efd: explicitly disabled via build config
00:15:29.338 eventdev: explicitly disabled via build config
00:15:29.338 dispatcher: explicitly disabled via build config
00:15:29.338 gpudev: explicitly disabled via build config
00:15:29.338 gro: explicitly disabled via build config
00:15:29.338 gso: explicitly disabled via build config
00:15:29.338 ip_frag: explicitly disabled via build config
00:15:29.338 jobstats: explicitly disabled via build config
00:15:29.338 latencystats: explicitly disabled via build config
00:15:29.338 lpm: explicitly disabled via build config
00:15:29.338 member: explicitly disabled via build config
00:15:29.338 pcapng: explicitly disabled via build config
00:15:29.338 rawdev: explicitly disabled via build config
00:15:29.338 regexdev: explicitly disabled via build config
00:15:29.338 mldev: explicitly disabled via build config
00:15:29.338 rib: explicitly disabled via build config
00:15:29.338 sched: explicitly disabled via build config
00:15:29.338 stack: explicitly disabled via build config
00:15:29.338 ipsec: explicitly disabled via build config
00:15:29.338 pdcp: explicitly disabled via build config
00:15:29.338 fib: explicitly disabled via build config
00:15:29.338 port: explicitly disabled via build config
00:15:29.338 pdump: explicitly disabled via build config
00:15:29.338 table: explicitly disabled via build config
00:15:29.338 pipeline: explicitly disabled via build config
00:15:29.338 graph: explicitly disabled via build config
00:15:29.338 node: explicitly disabled via build config
00:15:29.338
00:15:29.338 drivers:
00:15:29.338 common/cpt: not in enabled drivers build config
00:15:29.338 common/dpaax: not in enabled drivers build config
00:15:29.338 common/iavf: not in enabled drivers build config
00:15:29.338 common/idpf: not in enabled drivers build config
00:15:29.338 common/ionic: not in enabled drivers build config
00:15:29.338 common/mvep: not in enabled drivers build config
00:15:29.338 common/octeontx: not in enabled drivers build config
00:15:29.338 bus/auxiliary: not in enabled drivers build config
00:15:29.338 bus/cdx: not in enabled drivers build config
00:15:29.338 bus/dpaa: not in enabled drivers build config
00:15:29.338 bus/fslmc: not in enabled drivers build config
00:15:29.338 bus/ifpga: not in enabled drivers build config
00:15:29.338 bus/platform: not in enabled drivers build config
00:15:29.338 bus/uacce: not in enabled drivers build config
00:15:29.338 bus/vmbus: not in enabled drivers build config
00:15:29.338 common/cnxk: not in enabled drivers build config
00:15:29.338 common/mlx5: not in enabled drivers build config
00:15:29.338 common/nfp: not in enabled drivers build config
00:15:29.338 common/nitrox: not in enabled drivers build config
00:15:29.338 common/qat: not in enabled drivers
build config
00:15:29.338 common/sfc_efx: not in enabled drivers build config
00:15:29.338 mempool/bucket: not in enabled drivers build config
00:15:29.338 mempool/cnxk: not in enabled drivers build config
00:15:29.338 mempool/dpaa: not in enabled drivers build config
00:15:29.338 mempool/dpaa2: not in enabled drivers build config
00:15:29.338 mempool/octeontx: not in enabled drivers build config
00:15:29.338 mempool/stack: not in enabled drivers build config
00:15:29.338 dma/cnxk: not in enabled drivers build config
00:15:29.338 dma/dpaa: not in enabled drivers build config
00:15:29.338 dma/dpaa2: not in enabled drivers build config
00:15:29.338 dma/hisilicon: not in enabled drivers build config
00:15:29.338 dma/idxd: not in enabled drivers build config
00:15:29.338 dma/ioat: not in enabled drivers build config
00:15:29.338 dma/skeleton: not in enabled drivers build config
00:15:29.338 net/af_packet: not in enabled drivers build config
00:15:29.338 net/af_xdp: not in enabled drivers build config
00:15:29.338 net/ark: not in enabled drivers build config
00:15:29.338 net/atlantic: not in enabled drivers build config
00:15:29.338 net/avp: not in enabled drivers build config
00:15:29.338 net/axgbe: not in enabled drivers build config
00:15:29.338 net/bnx2x: not in enabled drivers build config
00:15:29.338 net/bnxt: not in enabled drivers build config
00:15:29.338 net/bonding: not in enabled drivers build config
00:15:29.338 net/cnxk: not in enabled drivers build config
00:15:29.338 net/cpfl: not in enabled drivers build config
00:15:29.338 net/cxgbe: not in enabled drivers build config
00:15:29.338 net/dpaa: not in enabled drivers build config
00:15:29.338 net/dpaa2: not in enabled drivers build config
00:15:29.338 net/e1000: not in enabled drivers build config
00:15:29.338 net/ena: not in enabled drivers build config
00:15:29.338 net/enetc: not in enabled drivers build config
00:15:29.338 net/enetfec: not in enabled drivers build config
00:15:29.338 net/enic: not in enabled drivers build config
00:15:29.338 net/failsafe: not in enabled drivers build config
00:15:29.338 net/fm10k: not in enabled drivers build config
00:15:29.338 net/gve: not in enabled drivers build config
00:15:29.338 net/hinic: not in enabled drivers build config
00:15:29.338 net/hns3: not in enabled drivers build config
00:15:29.338 net/i40e: not in enabled drivers build config
00:15:29.338 net/iavf: not in enabled drivers build config
00:15:29.338 net/ice: not in enabled drivers build config
00:15:29.338 net/idpf: not in enabled drivers build config
00:15:29.338 net/igc: not in enabled drivers build config
00:15:29.338 net/ionic: not in enabled drivers build config
00:15:29.338 net/ipn3ke: not in enabled drivers build config
00:15:29.338 net/ixgbe: not in enabled drivers build config
00:15:29.338 net/mana: not in enabled drivers build config
00:15:29.338 net/memif: not in enabled drivers build config
00:15:29.338 net/mlx4: not in enabled drivers build config
00:15:29.338 net/mlx5: not in enabled drivers build config
00:15:29.338 net/mvneta: not in enabled drivers build config
00:15:29.338 net/mvpp2: not in enabled drivers build config
00:15:29.338 net/netvsc: not in enabled drivers build config
00:15:29.338 net/nfb: not in enabled drivers build config
00:15:29.338 net/nfp: not in enabled drivers build config
00:15:29.338 net/ngbe: not in enabled drivers build config
00:15:29.338 net/null: not in enabled drivers build config
00:15:29.338 net/octeontx: not in enabled drivers build config
00:15:29.338 net/octeon_ep: not in enabled drivers build config
00:15:29.338 net/pcap: not in enabled drivers build config
00:15:29.338 net/pfe: not in enabled drivers build config
00:15:29.338 net/qede: not in enabled drivers build config
00:15:29.338 net/ring: not in enabled drivers build config
00:15:29.338 net/sfc: not in enabled drivers build config
00:15:29.338 net/softnic: not in enabled drivers build config
00:15:29.338 net/tap: not in enabled drivers build config
00:15:29.338 net/thunderx: not in enabled drivers build config
00:15:29.338 net/txgbe: not in enabled drivers build config
00:15:29.338 net/vdev_netvsc: not in enabled drivers build config
00:15:29.338 net/vhost: not in enabled drivers build config
00:15:29.338 net/virtio: not in enabled drivers build config
00:15:29.338 net/vmxnet3: not in enabled drivers build config
00:15:29.338 raw/*: missing internal dependency, "rawdev"
00:15:29.338 crypto/armv8: not in enabled drivers build config
00:15:29.338 crypto/bcmfs: not in enabled drivers build config
00:15:29.338 crypto/caam_jr: not in enabled drivers build config
00:15:29.338 crypto/ccp: not in enabled drivers build config
00:15:29.339 crypto/cnxk: not in enabled drivers build config
00:15:29.339 crypto/dpaa_sec: not in enabled drivers build config
00:15:29.339 crypto/dpaa2_sec: not in enabled drivers build config
00:15:29.339 crypto/ipsec_mb: not in enabled drivers build config
00:15:29.339 crypto/mlx5: not in enabled drivers build config
00:15:29.339 crypto/mvsam: not in enabled drivers build config
00:15:29.339 crypto/nitrox: not in enabled drivers build config
00:15:29.339 crypto/null: not in enabled drivers build config
00:15:29.339 crypto/octeontx: not in enabled drivers build config
00:15:29.339 crypto/openssl: not in enabled drivers build config
00:15:29.339 crypto/scheduler: not in enabled drivers build config
00:15:29.339 crypto/uadk: not in enabled drivers build config
00:15:29.339 crypto/virtio: not in enabled drivers build config
00:15:29.339 compress/isal: not in enabled drivers build config
00:15:29.339 compress/mlx5: not in enabled drivers build config
00:15:29.339 compress/nitrox: not in enabled drivers build config
00:15:29.339 compress/octeontx: not in enabled drivers build config
00:15:29.339 compress/zlib: not in enabled drivers build config
00:15:29.339 regex/*: missing internal dependency, "regexdev"
00:15:29.339 ml/*: missing internal dependency, "mldev"
00:15:29.339 vdpa/ifc: not in enabled
drivers build config
00:15:29.339 vdpa/mlx5: not in enabled drivers build config
00:15:29.339 vdpa/nfp: not in enabled drivers build config
00:15:29.339 vdpa/sfc: not in enabled drivers build config
00:15:29.339 event/*: missing internal dependency, "eventdev"
00:15:29.339 baseband/*: missing internal dependency, "bbdev"
00:15:29.339 gpu/*: missing internal dependency, "gpudev"
00:15:29.339
00:15:29.339
00:15:29.339 Build targets in project: 84
00:15:29.339
00:15:29.339 DPDK 24.03.0
00:15:29.339
00:15:29.339 User defined options
00:15:29.339 buildtype : debug
00:15:29.339 default_library : shared
00:15:29.339 libdir : lib
00:15:29.339 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:15:29.339 b_sanitize : address
00:15:29.339 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:15:29.339 c_link_args :
00:15:29.339 cpu_instruction_set: native
00:15:29.339 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:15:29.339 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:15:29.339 enable_docs : false
00:15:29.339 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:15:29.339 enable_kmods : false
00:15:29.339 max_lcores : 128
00:15:29.339 tests : false
00:15:29.339
00:15:29.339 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:15:29.600 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:15:29.862 [1/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:15:29.863
[2/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:15:29.863 [3/267] Linking static target lib/librte_kvargs.a
00:15:29.863 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:15:29.863 [5/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:15:29.863 [6/267] Linking static target lib/librte_log.a
00:15:30.123 [7/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:15:30.123 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:15:30.123 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:15:30.123 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:15:30.123 [11/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:15:30.123 [12/267] Linking static target lib/librte_telemetry.a
00:15:30.123 [13/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:15:30.123 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:15:30.123 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:15:30.123 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:15:30.123 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:15:30.384 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:15:30.645 [19/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:15:30.645 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:15:30.645 [21/267] Linking target lib/librte_log.so.24.1
00:15:30.645 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:15:30.645 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:15:30.645 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:15:30.645 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:15:30.907 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:15:30.907 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:15:30.907 [28/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:15:30.907 [29/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:15:30.907 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:15:30.907 [31/267] Linking target lib/librte_kvargs.so.24.1
00:15:30.907 [32/267] Linking target lib/librte_telemetry.so.24.1
00:15:30.907 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:15:30.907 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:15:31.168 [35/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:15:31.168 [36/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:15:31.168 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:15:31.168 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:15:31.168 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:15:31.168 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:15:31.168 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:15:31.169 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:15:31.430 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:15:31.430 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:15:31.430 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:15:31.430 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:15:31.430 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:15:31.430 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:15:31.430 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:15:31.692 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:15:31.692 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:15:31.692 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:15:31.692 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:15:31.692 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:15:31.692 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:15:31.952 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:15:31.952 [57/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:15:31.952 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:15:31.952 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:15:31.952 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:15:32.214 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:15:32.214 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:15:32.214 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:15:32.214 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:15:32.214 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:15:32.214 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:15:32.474 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:15:32.474 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:15:32.474 [69/267]
Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:15:32.474 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:15:32.474 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:15:32.474 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:15:32.474 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:15:32.735 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:15:32.735 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:15:32.735 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:15:32.735 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:15:32.735 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:15:32.735 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:15:32.995 [80/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:15:32.995 [81/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:15:32.995 [82/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:15:32.995 [83/267] Linking static target lib/librte_ring.a
00:15:32.995 [84/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:15:32.995 [85/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:15:32.995 [86/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:15:32.995 [87/267] Linking static target lib/librte_eal.a
00:15:32.995 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:15:33.255 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:15:33.255 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:15:33.255 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:15:33.255 [92/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:15:33.255 [93/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:15:33.255 [94/267] Linking static target lib/librte_rcu.a
00:15:33.255 [95/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:15:33.255 [96/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:15:33.255 [97/267] Linking static target lib/librte_mempool.a
00:15:33.515 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:15:33.515 [99/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:15:33.777 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:15:33.777 [101/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:15:33.777 [102/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:15:33.777 [103/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:15:33.777 [104/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:15:33.777 [105/267] Linking static target lib/librte_meter.a
00:15:33.777 [106/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:15:33.777 [107/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:15:33.777 [108/267] Linking static target lib/librte_net.a
00:15:33.777 [109/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:15:33.777 [110/267] Linking static target lib/librte_mbuf.a
00:15:34.037 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:15:34.037 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:15:34.037 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:15:34.037 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:15:34.037 [115/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:15:34.298 [116/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:15:34.298 [117/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:15:34.559 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:15:34.559 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:15:34.559 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:15:34.820 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:15:34.820 [122/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:15:34.820 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:15:34.820 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:15:34.820 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:15:34.820 [126/267] Linking static target lib/librte_pci.a
00:15:35.081 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:15:35.081 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:15:35.081 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:15:35.081 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:15:35.081 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:15:35.081 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:15:35.081 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:15:35.081 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:15:35.343 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:15:35.343 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:15:35.343 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:15:35.343 [138/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:15:35.343 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:15:35.343 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:15:35.343 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:15:35.343 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:15:35.343 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:15:35.343 [144/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:15:35.343 [145/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:15:35.343 [146/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:15:35.343 [147/267] Linking static target lib/librte_cmdline.a
00:15:35.604 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:15:35.604 [149/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:15:35.604 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:15:35.604 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:15:35.604 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:15:35.866 [153/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:15:35.866 [154/267] Linking static target lib/librte_timer.a
00:15:35.866 [155/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:15:35.866 [156/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:15:36.127 [157/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:15:36.127 [158/267] Linking static target lib/librte_compressdev.a
00:15:36.127 [159/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:15:36.127 [160/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:15:36.127
[161/267] Linking static target lib/librte_hash.a 00:15:36.127 [162/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:15:36.387 [163/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:15:36.387 [164/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:15:36.387 [165/267] Linking static target lib/librte_dmadev.a 00:15:36.387 [166/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:15:36.387 [167/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:15:36.387 [168/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:15:36.387 [169/267] Linking static target lib/librte_ethdev.a 00:15:36.387 [170/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:15:36.387 [171/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:15:36.647 [172/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:36.648 [173/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:15:36.648 [174/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:15:36.648 [175/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:15:36.648 [176/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:15:36.648 [177/267] Linking static target lib/librte_cryptodev.a 00:15:36.907 [178/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:15:36.907 [179/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:15:36.907 [180/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:15:36.907 [181/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:36.907 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:15:36.907 
[183/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:15:37.167 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:15:37.167 [185/267] Linking static target lib/librte_power.a 00:15:37.167 [186/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:15:37.167 [187/267] Linking static target lib/librte_reorder.a 00:15:37.167 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:15:37.167 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:15:37.429 [190/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:15:37.429 [191/267] Linking static target lib/librte_security.a 00:15:37.429 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:15:37.705 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:15:37.705 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:15:37.973 [195/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:15:37.973 [196/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:15:37.973 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:15:37.973 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:15:37.973 [199/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:15:38.233 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:15:38.234 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:15:38.494 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:15:38.494 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:15:38.494 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:15:38.494 [205/267] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:15:38.494 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:15:38.494 [207/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:38.494 [208/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:15:38.494 [209/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:15:38.754 [210/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:15:38.754 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:15:38.754 [212/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:15:38.754 [213/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:15:38.754 [214/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:15:38.754 [215/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:15:38.754 [216/267] Linking static target drivers/librte_bus_vdev.a 00:15:38.754 [217/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:15:38.754 [218/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:15:38.754 [219/267] Linking static target drivers/librte_bus_pci.a 00:15:38.754 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:15:39.016 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:15:39.016 [222/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:15:39.016 [223/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:15:39.016 [224/267] Linking static target drivers/librte_mempool_ring.a 00:15:39.016 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:39.277 
[226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:15:39.539 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:15:40.505 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:15:40.505 [229/267] Linking target lib/librte_eal.so.24.1 00:15:40.765 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:15:40.765 [231/267] Linking target lib/librte_ring.so.24.1 00:15:40.765 [232/267] Linking target lib/librte_meter.so.24.1 00:15:40.765 [233/267] Linking target lib/librte_pci.so.24.1 00:15:40.765 [234/267] Linking target lib/librte_dmadev.so.24.1 00:15:40.765 [235/267] Linking target lib/librte_timer.so.24.1 00:15:40.765 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:15:40.765 [237/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:15:40.765 [238/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:15:40.765 [239/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:15:40.765 [240/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:15:40.765 [241/267] Linking target lib/librte_rcu.so.24.1 00:15:40.765 [242/267] Linking target lib/librte_mempool.so.24.1 00:15:40.765 [243/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:15:40.765 [244/267] Linking target drivers/librte_bus_pci.so.24.1 00:15:41.025 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:15:41.025 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:15:41.025 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:15:41.025 [248/267] Linking target lib/librte_mbuf.so.24.1 00:15:41.026 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:15:41.026 
[250/267] Linking target lib/librte_reorder.so.24.1 00:15:41.026 [251/267] Linking target lib/librte_compressdev.so.24.1 00:15:41.026 [252/267] Linking target lib/librte_net.so.24.1 00:15:41.285 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:15:41.285 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:15:41.285 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:15:41.285 [256/267] Linking target lib/librte_cmdline.so.24.1 00:15:41.285 [257/267] Linking target lib/librte_hash.so.24.1 00:15:41.285 [258/267] Linking target lib/librte_security.so.24.1 00:15:41.285 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:15:41.858 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:42.161 [261/267] Linking target lib/librte_ethdev.so.24.1 00:15:42.161 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:15:42.161 [263/267] Linking target lib/librte_power.so.24.1 00:15:42.421 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:15:42.421 [265/267] Linking static target lib/librte_vhost.a 00:15:43.807 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:15:43.807 [267/267] Linking target lib/librte_vhost.so.24.1 00:15:43.807 INFO: autodetecting backend as ninja 00:15:43.807 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:15:58.716 CC lib/ut/ut.o 00:15:58.716 CC lib/log/log.o 00:15:58.717 CC lib/log/log_flags.o 00:15:58.717 CC lib/log/log_deprecated.o 00:15:58.717 CC lib/ut_mock/mock.o 00:15:58.717 LIB libspdk_ut_mock.a 00:15:59.016 LIB libspdk_ut.a 00:15:59.016 SO libspdk_ut_mock.so.6.0 00:15:59.016 LIB libspdk_log.a 00:15:59.016 SO libspdk_ut.so.2.0 00:15:59.016 SO libspdk_log.so.7.1 00:15:59.016 SYMLINK 
libspdk_ut_mock.so 00:15:59.016 SYMLINK libspdk_ut.so 00:15:59.016 SYMLINK libspdk_log.so 00:15:59.016 CC lib/dma/dma.o 00:15:59.016 CC lib/util/base64.o 00:15:59.016 CC lib/util/bit_array.o 00:15:59.016 CC lib/util/crc16.o 00:15:59.016 CC lib/util/cpuset.o 00:15:59.016 CC lib/ioat/ioat.o 00:15:59.016 CC lib/util/crc32.o 00:15:59.016 CC lib/util/crc32c.o 00:15:59.016 CXX lib/trace_parser/trace.o 00:15:59.297 CC lib/vfio_user/host/vfio_user_pci.o 00:15:59.297 CC lib/util/crc32_ieee.o 00:15:59.297 CC lib/util/crc64.o 00:15:59.297 CC lib/util/dif.o 00:15:59.297 LIB libspdk_dma.a 00:15:59.297 CC lib/util/fd.o 00:15:59.297 CC lib/util/fd_group.o 00:15:59.297 CC lib/util/file.o 00:15:59.297 SO libspdk_dma.so.5.0 00:15:59.297 CC lib/util/hexlify.o 00:15:59.297 CC lib/util/iov.o 00:15:59.297 SYMLINK libspdk_dma.so 00:15:59.297 CC lib/util/math.o 00:15:59.297 LIB libspdk_ioat.a 00:15:59.297 CC lib/util/net.o 00:15:59.297 SO libspdk_ioat.so.7.0 00:15:59.556 CC lib/util/pipe.o 00:15:59.556 CC lib/vfio_user/host/vfio_user.o 00:15:59.556 CC lib/util/strerror_tls.o 00:15:59.556 SYMLINK libspdk_ioat.so 00:15:59.556 CC lib/util/string.o 00:15:59.556 CC lib/util/uuid.o 00:15:59.556 CC lib/util/xor.o 00:15:59.556 CC lib/util/zipf.o 00:15:59.556 CC lib/util/md5.o 00:15:59.556 LIB libspdk_vfio_user.a 00:15:59.556 SO libspdk_vfio_user.so.5.0 00:15:59.814 SYMLINK libspdk_vfio_user.so 00:15:59.814 LIB libspdk_trace_parser.a 00:15:59.814 SO libspdk_trace_parser.so.6.0 00:15:59.814 LIB libspdk_util.a 00:16:00.071 SYMLINK libspdk_trace_parser.so 00:16:00.071 SO libspdk_util.so.10.1 00:16:00.071 SYMLINK libspdk_util.so 00:16:00.329 CC lib/env_dpdk/env.o 00:16:00.329 CC lib/env_dpdk/pci.o 00:16:00.329 CC lib/env_dpdk/memory.o 00:16:00.329 CC lib/env_dpdk/init.o 00:16:00.329 CC lib/env_dpdk/threads.o 00:16:00.329 CC lib/json/json_parse.o 00:16:00.329 CC lib/vmd/vmd.o 00:16:00.329 CC lib/conf/conf.o 00:16:00.329 CC lib/idxd/idxd.o 00:16:00.329 CC lib/rdma_utils/rdma_utils.o 00:16:00.329 CC 
lib/env_dpdk/pci_ioat.o 00:16:00.329 LIB libspdk_conf.a 00:16:00.329 CC lib/json/json_util.o 00:16:00.587 LIB libspdk_rdma_utils.a 00:16:00.587 SO libspdk_conf.so.6.0 00:16:00.587 SO libspdk_rdma_utils.so.1.0 00:16:00.587 CC lib/env_dpdk/pci_virtio.o 00:16:00.587 CC lib/env_dpdk/pci_vmd.o 00:16:00.587 SYMLINK libspdk_conf.so 00:16:00.587 SYMLINK libspdk_rdma_utils.so 00:16:00.587 CC lib/env_dpdk/pci_idxd.o 00:16:00.587 CC lib/idxd/idxd_user.o 00:16:00.587 CC lib/idxd/idxd_kernel.o 00:16:00.587 CC lib/env_dpdk/pci_event.o 00:16:00.587 CC lib/vmd/led.o 00:16:00.587 CC lib/env_dpdk/sigbus_handler.o 00:16:00.587 CC lib/json/json_write.o 00:16:00.845 CC lib/env_dpdk/pci_dpdk.o 00:16:00.845 CC lib/env_dpdk/pci_dpdk_2207.o 00:16:00.845 CC lib/env_dpdk/pci_dpdk_2211.o 00:16:00.845 LIB libspdk_idxd.a 00:16:00.845 SO libspdk_idxd.so.12.1 00:16:00.845 SYMLINK libspdk_idxd.so 00:16:00.845 LIB libspdk_json.a 00:16:00.845 CC lib/rdma_provider/common.o 00:16:00.845 CC lib/rdma_provider/rdma_provider_verbs.o 00:16:00.845 SO libspdk_json.so.6.0 00:16:00.845 LIB libspdk_vmd.a 00:16:00.845 SO libspdk_vmd.so.6.0 00:16:00.845 SYMLINK libspdk_json.so 00:16:01.104 SYMLINK libspdk_vmd.so 00:16:01.104 LIB libspdk_rdma_provider.a 00:16:01.104 SO libspdk_rdma_provider.so.7.0 00:16:01.104 SYMLINK libspdk_rdma_provider.so 00:16:01.104 CC lib/jsonrpc/jsonrpc_server.o 00:16:01.104 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:16:01.104 CC lib/jsonrpc/jsonrpc_client.o 00:16:01.104 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:16:01.362 LIB libspdk_env_dpdk.a 00:16:01.362 LIB libspdk_jsonrpc.a 00:16:01.362 SO libspdk_jsonrpc.so.6.0 00:16:01.362 SO libspdk_env_dpdk.so.15.1 00:16:01.620 SYMLINK libspdk_jsonrpc.so 00:16:01.620 SYMLINK libspdk_env_dpdk.so 00:16:01.620 CC lib/rpc/rpc.o 00:16:01.878 LIB libspdk_rpc.a 00:16:01.878 SO libspdk_rpc.so.6.0 00:16:01.878 SYMLINK libspdk_rpc.so 00:16:02.135 CC lib/notify/notify.o 00:16:02.135 CC lib/notify/notify_rpc.o 00:16:02.135 CC lib/keyring/keyring.o 00:16:02.135 CC 
lib/keyring/keyring_rpc.o 00:16:02.135 CC lib/trace/trace.o 00:16:02.135 CC lib/trace/trace_rpc.o 00:16:02.135 CC lib/trace/trace_flags.o 00:16:02.394 LIB libspdk_notify.a 00:16:02.394 SO libspdk_notify.so.6.0 00:16:02.394 LIB libspdk_keyring.a 00:16:02.394 SYMLINK libspdk_notify.so 00:16:02.394 SO libspdk_keyring.so.2.0 00:16:02.394 LIB libspdk_trace.a 00:16:02.394 SYMLINK libspdk_keyring.so 00:16:02.394 SO libspdk_trace.so.11.0 00:16:02.394 SYMLINK libspdk_trace.so 00:16:02.651 CC lib/sock/sock.o 00:16:02.651 CC lib/sock/sock_rpc.o 00:16:02.651 CC lib/thread/thread.o 00:16:02.651 CC lib/thread/iobuf.o 00:16:03.216 LIB libspdk_sock.a 00:16:03.216 SO libspdk_sock.so.10.0 00:16:03.216 SYMLINK libspdk_sock.so 00:16:03.473 CC lib/nvme/nvme_ctrlr_cmd.o 00:16:03.473 CC lib/nvme/nvme_ns_cmd.o 00:16:03.473 CC lib/nvme/nvme_ctrlr.o 00:16:03.473 CC lib/nvme/nvme_ns.o 00:16:03.473 CC lib/nvme/nvme_fabric.o 00:16:03.473 CC lib/nvme/nvme.o 00:16:03.473 CC lib/nvme/nvme_qpair.o 00:16:03.473 CC lib/nvme/nvme_pcie_common.o 00:16:03.473 CC lib/nvme/nvme_pcie.o 00:16:04.039 LIB libspdk_thread.a 00:16:04.039 SO libspdk_thread.so.11.0 00:16:04.039 CC lib/nvme/nvme_quirks.o 00:16:04.039 SYMLINK libspdk_thread.so 00:16:04.039 CC lib/nvme/nvme_transport.o 00:16:04.039 CC lib/nvme/nvme_discovery.o 00:16:04.039 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:16:04.039 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:16:04.298 CC lib/nvme/nvme_tcp.o 00:16:04.298 CC lib/nvme/nvme_opal.o 00:16:04.298 CC lib/nvme/nvme_io_msg.o 00:16:04.555 CC lib/nvme/nvme_poll_group.o 00:16:04.555 CC lib/nvme/nvme_zns.o 00:16:04.555 CC lib/nvme/nvme_stubs.o 00:16:04.555 CC lib/nvme/nvme_auth.o 00:16:04.555 CC lib/nvme/nvme_cuse.o 00:16:04.812 CC lib/accel/accel.o 00:16:04.812 CC lib/blob/blobstore.o 00:16:04.812 CC lib/blob/request.o 00:16:05.070 CC lib/blob/zeroes.o 00:16:05.070 CC lib/blob/blob_bs_dev.o 00:16:05.070 CC lib/nvme/nvme_rdma.o 00:16:05.070 CC lib/init/json_config.o 00:16:05.327 CC lib/virtio/virtio.o 00:16:05.327 CC 
lib/fsdev/fsdev.o 00:16:05.327 CC lib/init/subsystem.o 00:16:05.584 CC lib/fsdev/fsdev_io.o 00:16:05.584 CC lib/fsdev/fsdev_rpc.o 00:16:05.584 CC lib/virtio/virtio_vhost_user.o 00:16:05.584 CC lib/init/subsystem_rpc.o 00:16:05.584 CC lib/init/rpc.o 00:16:05.584 CC lib/accel/accel_rpc.o 00:16:05.585 CC lib/virtio/virtio_vfio_user.o 00:16:05.842 CC lib/virtio/virtio_pci.o 00:16:05.842 LIB libspdk_init.a 00:16:05.842 SO libspdk_init.so.6.0 00:16:05.842 CC lib/accel/accel_sw.o 00:16:05.842 SYMLINK libspdk_init.so 00:16:05.842 LIB libspdk_fsdev.a 00:16:05.842 SO libspdk_fsdev.so.2.0 00:16:06.100 SYMLINK libspdk_fsdev.so 00:16:06.100 LIB libspdk_virtio.a 00:16:06.100 CC lib/event/reactor.o 00:16:06.100 CC lib/event/scheduler_static.o 00:16:06.100 CC lib/event/log_rpc.o 00:16:06.100 CC lib/event/app.o 00:16:06.100 CC lib/event/app_rpc.o 00:16:06.100 SO libspdk_virtio.so.7.0 00:16:06.100 LIB libspdk_accel.a 00:16:06.100 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:16:06.100 SYMLINK libspdk_virtio.so 00:16:06.100 SO libspdk_accel.so.16.0 00:16:06.357 SYMLINK libspdk_accel.so 00:16:06.357 LIB libspdk_nvme.a 00:16:06.668 CC lib/bdev/bdev.o 00:16:06.669 CC lib/bdev/bdev_rpc.o 00:16:06.669 CC lib/bdev/bdev_zone.o 00:16:06.669 CC lib/bdev/part.o 00:16:06.669 CC lib/bdev/scsi_nvme.o 00:16:06.669 LIB libspdk_event.a 00:16:06.669 SO libspdk_event.so.14.0 00:16:06.669 SYMLINK libspdk_event.so 00:16:06.669 SO libspdk_nvme.so.15.0 00:16:06.927 SYMLINK libspdk_nvme.so 00:16:06.927 LIB libspdk_fuse_dispatcher.a 00:16:07.185 SO libspdk_fuse_dispatcher.so.1.0 00:16:07.185 SYMLINK libspdk_fuse_dispatcher.so 00:16:07.750 LIB libspdk_blob.a 00:16:07.750 SO libspdk_blob.so.12.0 00:16:07.750 SYMLINK libspdk_blob.so 00:16:08.008 CC lib/lvol/lvol.o 00:16:08.008 CC lib/blobfs/blobfs.o 00:16:08.008 CC lib/blobfs/tree.o 00:16:08.941 LIB libspdk_blobfs.a 00:16:08.941 SO libspdk_blobfs.so.11.0 00:16:08.941 SYMLINK libspdk_blobfs.so 00:16:08.941 LIB libspdk_lvol.a 00:16:08.941 SO libspdk_lvol.so.11.0 
00:16:09.199 SYMLINK libspdk_lvol.so 00:16:09.456 LIB libspdk_bdev.a 00:16:09.456 SO libspdk_bdev.so.17.0 00:16:09.713 SYMLINK libspdk_bdev.so 00:16:09.713 CC lib/ftl/ftl_core.o 00:16:09.713 CC lib/ftl/ftl_init.o 00:16:09.713 CC lib/ftl/ftl_layout.o 00:16:09.713 CC lib/ftl/ftl_io.o 00:16:09.714 CC lib/nvmf/ctrlr.o 00:16:09.714 CC lib/ftl/ftl_debug.o 00:16:09.714 CC lib/nvmf/ctrlr_discovery.o 00:16:09.714 CC lib/nbd/nbd.o 00:16:09.714 CC lib/ublk/ublk.o 00:16:09.714 CC lib/scsi/dev.o 00:16:09.995 CC lib/ftl/ftl_sb.o 00:16:09.995 CC lib/nvmf/ctrlr_bdev.o 00:16:09.995 CC lib/scsi/lun.o 00:16:09.995 CC lib/scsi/port.o 00:16:09.995 CC lib/ftl/ftl_l2p.o 00:16:09.995 CC lib/nvmf/subsystem.o 00:16:09.995 CC lib/nvmf/nvmf.o 00:16:10.253 CC lib/nbd/nbd_rpc.o 00:16:10.253 CC lib/ftl/ftl_l2p_flat.o 00:16:10.253 CC lib/nvmf/nvmf_rpc.o 00:16:10.253 CC lib/nvmf/transport.o 00:16:10.253 CC lib/scsi/scsi.o 00:16:10.253 LIB libspdk_nbd.a 00:16:10.253 SO libspdk_nbd.so.7.0 00:16:10.253 CC lib/ftl/ftl_nv_cache.o 00:16:10.511 CC lib/scsi/scsi_bdev.o 00:16:10.511 SYMLINK libspdk_nbd.so 00:16:10.511 CC lib/scsi/scsi_pr.o 00:16:10.511 CC lib/ublk/ublk_rpc.o 00:16:10.511 CC lib/nvmf/tcp.o 00:16:10.511 LIB libspdk_ublk.a 00:16:10.511 SO libspdk_ublk.so.3.0 00:16:10.769 SYMLINK libspdk_ublk.so 00:16:10.769 CC lib/nvmf/stubs.o 00:16:10.769 CC lib/scsi/scsi_rpc.o 00:16:10.769 CC lib/ftl/ftl_band.o 00:16:10.769 CC lib/scsi/task.o 00:16:11.026 CC lib/nvmf/mdns_server.o 00:16:11.026 CC lib/nvmf/rdma.o 00:16:11.026 CC lib/nvmf/auth.o 00:16:11.026 CC lib/ftl/ftl_band_ops.o 00:16:11.026 CC lib/ftl/ftl_writer.o 00:16:11.026 LIB libspdk_scsi.a 00:16:11.284 SO libspdk_scsi.so.9.0 00:16:11.284 SYMLINK libspdk_scsi.so 00:16:11.284 CC lib/ftl/ftl_rq.o 00:16:11.284 CC lib/ftl/ftl_reloc.o 00:16:11.284 CC lib/ftl/ftl_l2p_cache.o 00:16:11.284 CC lib/ftl/ftl_p2l.o 00:16:11.284 CC lib/ftl/ftl_p2l_log.o 00:16:11.541 CC lib/ftl/mngt/ftl_mngt.o 00:16:11.541 CC lib/iscsi/conn.o 00:16:11.541 CC 
lib/ftl/mngt/ftl_mngt_bdev.o 00:16:11.541 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:16:11.541 CC lib/ftl/mngt/ftl_mngt_startup.o 00:16:11.800 CC lib/ftl/mngt/ftl_mngt_md.o 00:16:11.800 CC lib/ftl/mngt/ftl_mngt_misc.o 00:16:11.800 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:16:11.800 CC lib/iscsi/init_grp.o 00:16:11.800 CC lib/iscsi/iscsi.o 00:16:11.800 CC lib/iscsi/param.o 00:16:11.800 CC lib/iscsi/portal_grp.o 00:16:11.800 CC lib/iscsi/tgt_node.o 00:16:12.057 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:16:12.057 CC lib/ftl/mngt/ftl_mngt_band.o 00:16:12.057 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:16:12.057 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:16:12.057 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:16:12.057 CC lib/iscsi/iscsi_subsystem.o 00:16:12.057 CC lib/vhost/vhost.o 00:16:12.057 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:16:12.314 CC lib/ftl/utils/ftl_conf.o 00:16:12.314 CC lib/ftl/utils/ftl_md.o 00:16:12.314 CC lib/ftl/utils/ftl_mempool.o 00:16:12.314 CC lib/vhost/vhost_rpc.o 00:16:12.314 CC lib/ftl/utils/ftl_bitmap.o 00:16:12.314 CC lib/ftl/utils/ftl_property.o 00:16:12.314 CC lib/iscsi/iscsi_rpc.o 00:16:12.572 CC lib/iscsi/task.o 00:16:12.572 CC lib/vhost/vhost_scsi.o 00:16:12.572 CC lib/vhost/vhost_blk.o 00:16:12.572 CC lib/vhost/rte_vhost_user.o 00:16:12.572 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:16:12.572 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:16:12.830 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:16:12.830 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:16:12.830 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:16:12.830 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:16:12.830 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:16:12.830 CC lib/ftl/upgrade/ftl_sb_v3.o 00:16:12.830 CC lib/ftl/upgrade/ftl_sb_v5.o 00:16:13.087 CC lib/ftl/nvc/ftl_nvc_dev.o 00:16:13.087 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:16:13.087 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:16:13.087 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:16:13.087 CC lib/ftl/base/ftl_base_dev.o 00:16:13.087 CC lib/ftl/base/ftl_base_bdev.o 00:16:13.087 CC 
lib/ftl/ftl_trace.o 00:16:13.345 LIB libspdk_iscsi.a 00:16:13.345 LIB libspdk_nvmf.a 00:16:13.345 SO libspdk_iscsi.so.8.0 00:16:13.345 LIB libspdk_ftl.a 00:16:13.345 SO libspdk_nvmf.so.20.0 00:16:13.603 SYMLINK libspdk_iscsi.so 00:16:13.603 SO libspdk_ftl.so.9.0 00:16:13.603 LIB libspdk_vhost.a 00:16:13.603 SO libspdk_vhost.so.8.0 00:16:13.603 SYMLINK libspdk_nvmf.so 00:16:13.603 SYMLINK libspdk_vhost.so 00:16:13.861 SYMLINK libspdk_ftl.so 00:16:14.118 CC module/env_dpdk/env_dpdk_rpc.o 00:16:14.118 CC module/keyring/file/keyring.o 00:16:14.118 CC module/accel/ioat/accel_ioat.o 00:16:14.118 CC module/accel/error/accel_error.o 00:16:14.118 CC module/accel/dsa/accel_dsa.o 00:16:14.118 CC module/fsdev/aio/fsdev_aio.o 00:16:14.118 CC module/accel/iaa/accel_iaa.o 00:16:14.118 CC module/sock/posix/posix.o 00:16:14.118 CC module/blob/bdev/blob_bdev.o 00:16:14.118 CC module/scheduler/dynamic/scheduler_dynamic.o 00:16:14.377 LIB libspdk_env_dpdk_rpc.a 00:16:14.377 SO libspdk_env_dpdk_rpc.so.6.0 00:16:14.377 CC module/keyring/file/keyring_rpc.o 00:16:14.377 SYMLINK libspdk_env_dpdk_rpc.so 00:16:14.377 CC module/accel/error/accel_error_rpc.o 00:16:14.377 CC module/accel/ioat/accel_ioat_rpc.o 00:16:14.377 LIB libspdk_scheduler_dynamic.a 00:16:14.377 CC module/accel/iaa/accel_iaa_rpc.o 00:16:14.377 SO libspdk_scheduler_dynamic.so.4.0 00:16:14.377 CC module/fsdev/aio/fsdev_aio_rpc.o 00:16:14.377 LIB libspdk_keyring_file.a 00:16:14.377 SO libspdk_keyring_file.so.2.0 00:16:14.377 LIB libspdk_accel_error.a 00:16:14.377 LIB libspdk_accel_ioat.a 00:16:14.377 SYMLINK libspdk_scheduler_dynamic.so 00:16:14.634 SO libspdk_accel_error.so.2.0 00:16:14.634 SYMLINK libspdk_keyring_file.so 00:16:14.634 LIB libspdk_blob_bdev.a 00:16:14.634 SO libspdk_accel_ioat.so.6.0 00:16:14.634 CC module/accel/dsa/accel_dsa_rpc.o 00:16:14.634 SO libspdk_blob_bdev.so.12.0 00:16:14.634 LIB libspdk_accel_iaa.a 00:16:14.634 SYMLINK libspdk_accel_error.so 00:16:14.634 SYMLINK libspdk_accel_ioat.so 00:16:14.634 CC 
module/fsdev/aio/linux_aio_mgr.o 00:16:14.634 SO libspdk_accel_iaa.so.3.0 00:16:14.634 SYMLINK libspdk_blob_bdev.so 00:16:14.634 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:16:14.634 LIB libspdk_accel_dsa.a 00:16:14.634 CC module/keyring/linux/keyring.o 00:16:14.634 SYMLINK libspdk_accel_iaa.so 00:16:14.634 SO libspdk_accel_dsa.so.5.0 00:16:14.634 CC module/scheduler/gscheduler/gscheduler.o 00:16:14.634 SYMLINK libspdk_accel_dsa.so 00:16:14.891 LIB libspdk_scheduler_dpdk_governor.a 00:16:14.891 CC module/keyring/linux/keyring_rpc.o 00:16:14.891 SO libspdk_scheduler_dpdk_governor.so.4.0 00:16:14.891 CC module/bdev/delay/vbdev_delay.o 00:16:14.891 CC module/blobfs/bdev/blobfs_bdev.o 00:16:14.891 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:16:14.891 CC module/bdev/error/vbdev_error.o 00:16:14.891 LIB libspdk_scheduler_gscheduler.a 00:16:14.891 SYMLINK libspdk_scheduler_dpdk_governor.so 00:16:14.891 CC module/bdev/error/vbdev_error_rpc.o 00:16:14.891 SO libspdk_scheduler_gscheduler.so.4.0 00:16:14.891 CC module/bdev/gpt/gpt.o 00:16:14.891 LIB libspdk_keyring_linux.a 00:16:14.891 LIB libspdk_sock_posix.a 00:16:14.891 SYMLINK libspdk_scheduler_gscheduler.so 00:16:14.891 CC module/bdev/gpt/vbdev_gpt.o 00:16:14.891 SO libspdk_keyring_linux.so.1.0 00:16:14.891 LIB libspdk_fsdev_aio.a 00:16:14.891 SO libspdk_sock_posix.so.6.0 00:16:14.891 SYMLINK libspdk_keyring_linux.so 00:16:14.891 CC module/bdev/delay/vbdev_delay_rpc.o 00:16:14.891 SO libspdk_fsdev_aio.so.1.0 00:16:14.891 LIB libspdk_blobfs_bdev.a 00:16:14.891 SO libspdk_blobfs_bdev.so.6.0 00:16:14.891 SYMLINK libspdk_sock_posix.so 00:16:15.149 SYMLINK libspdk_fsdev_aio.so 00:16:15.149 SYMLINK libspdk_blobfs_bdev.so 00:16:15.149 LIB libspdk_bdev_error.a 00:16:15.149 SO libspdk_bdev_error.so.6.0 00:16:15.149 LIB libspdk_bdev_gpt.a 00:16:15.149 LIB libspdk_bdev_delay.a 00:16:15.149 CC module/bdev/lvol/vbdev_lvol.o 00:16:15.149 SYMLINK libspdk_bdev_error.so 00:16:15.149 CC module/bdev/malloc/bdev_malloc.o 
00:16:15.149 SO libspdk_bdev_gpt.so.6.0 00:16:15.149 CC module/bdev/null/bdev_null.o 00:16:15.149 SO libspdk_bdev_delay.so.6.0 00:16:15.149 CC module/bdev/nvme/bdev_nvme.o 00:16:15.149 CC module/bdev/raid/bdev_raid.o 00:16:15.149 CC module/bdev/passthru/vbdev_passthru.o 00:16:15.149 SYMLINK libspdk_bdev_delay.so 00:16:15.149 SYMLINK libspdk_bdev_gpt.so 00:16:15.149 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:16:15.149 CC module/bdev/nvme/bdev_nvme_rpc.o 00:16:15.149 CC module/bdev/split/vbdev_split.o 00:16:15.149 CC module/bdev/zone_block/vbdev_zone_block.o 00:16:15.407 CC module/bdev/nvme/nvme_rpc.o 00:16:15.407 CC module/bdev/null/bdev_null_rpc.o 00:16:15.407 CC module/bdev/split/vbdev_split_rpc.o 00:16:15.407 LIB libspdk_bdev_passthru.a 00:16:15.407 CC module/bdev/malloc/bdev_malloc_rpc.o 00:16:15.407 SO libspdk_bdev_passthru.so.6.0 00:16:15.665 LIB libspdk_bdev_null.a 00:16:15.665 SYMLINK libspdk_bdev_passthru.so 00:16:15.665 CC module/bdev/nvme/bdev_mdns_client.o 00:16:15.665 SO libspdk_bdev_null.so.6.0 00:16:15.665 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:16:15.665 LIB libspdk_bdev_split.a 00:16:15.665 SO libspdk_bdev_split.so.6.0 00:16:15.665 CC module/bdev/aio/bdev_aio.o 00:16:15.665 SYMLINK libspdk_bdev_null.so 00:16:15.665 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:16:15.665 LIB libspdk_bdev_malloc.a 00:16:15.665 SYMLINK libspdk_bdev_split.so 00:16:15.665 CC module/bdev/raid/bdev_raid_rpc.o 00:16:15.665 SO libspdk_bdev_malloc.so.6.0 00:16:15.665 LIB libspdk_bdev_zone_block.a 00:16:15.665 SYMLINK libspdk_bdev_malloc.so 00:16:15.665 CC module/bdev/raid/bdev_raid_sb.o 00:16:15.665 CC module/bdev/ftl/bdev_ftl.o 00:16:15.665 SO libspdk_bdev_zone_block.so.6.0 00:16:15.923 CC module/bdev/iscsi/bdev_iscsi.o 00:16:15.923 CC module/bdev/virtio/bdev_virtio_scsi.o 00:16:15.923 SYMLINK libspdk_bdev_zone_block.so 00:16:15.923 CC module/bdev/ftl/bdev_ftl_rpc.o 00:16:15.923 CC module/bdev/virtio/bdev_virtio_blk.o 00:16:15.923 CC module/bdev/aio/bdev_aio_rpc.o 
00:16:15.923 CC module/bdev/virtio/bdev_virtio_rpc.o 00:16:15.923 LIB libspdk_bdev_lvol.a 00:16:15.923 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:16:15.923 SO libspdk_bdev_lvol.so.6.0 00:16:16.182 LIB libspdk_bdev_aio.a 00:16:16.182 SYMLINK libspdk_bdev_lvol.so 00:16:16.182 CC module/bdev/raid/raid0.o 00:16:16.182 LIB libspdk_bdev_ftl.a 00:16:16.182 SO libspdk_bdev_aio.so.6.0 00:16:16.182 SO libspdk_bdev_ftl.so.6.0 00:16:16.182 CC module/bdev/nvme/vbdev_opal.o 00:16:16.182 CC module/bdev/nvme/vbdev_opal_rpc.o 00:16:16.182 SYMLINK libspdk_bdev_aio.so 00:16:16.182 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:16:16.182 CC module/bdev/raid/raid1.o 00:16:16.182 SYMLINK libspdk_bdev_ftl.so 00:16:16.182 CC module/bdev/raid/concat.o 00:16:16.182 LIB libspdk_bdev_iscsi.a 00:16:16.182 SO libspdk_bdev_iscsi.so.6.0 00:16:16.182 CC module/bdev/raid/raid5f.o 00:16:16.182 SYMLINK libspdk_bdev_iscsi.so 00:16:16.441 LIB libspdk_bdev_virtio.a 00:16:16.441 SO libspdk_bdev_virtio.so.6.0 00:16:16.441 SYMLINK libspdk_bdev_virtio.so 00:16:16.698 LIB libspdk_bdev_raid.a 00:16:16.698 SO libspdk_bdev_raid.so.6.0 00:16:16.955 SYMLINK libspdk_bdev_raid.so 00:16:17.522 LIB libspdk_bdev_nvme.a 00:16:17.522 SO libspdk_bdev_nvme.so.7.1 00:16:17.522 SYMLINK libspdk_bdev_nvme.so 00:16:18.089 CC module/event/subsystems/iobuf/iobuf.o 00:16:18.089 CC module/event/subsystems/fsdev/fsdev.o 00:16:18.089 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:16:18.089 CC module/event/subsystems/keyring/keyring.o 00:16:18.089 CC module/event/subsystems/sock/sock.o 00:16:18.089 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:16:18.089 CC module/event/subsystems/scheduler/scheduler.o 00:16:18.089 CC module/event/subsystems/vmd/vmd.o 00:16:18.089 CC module/event/subsystems/vmd/vmd_rpc.o 00:16:18.089 LIB libspdk_event_keyring.a 00:16:18.089 LIB libspdk_event_sock.a 00:16:18.089 LIB libspdk_event_fsdev.a 00:16:18.089 LIB libspdk_event_scheduler.a 00:16:18.089 LIB libspdk_event_vhost_blk.a 00:16:18.089 SO 
libspdk_event_keyring.so.1.0 00:16:18.089 LIB libspdk_event_iobuf.a 00:16:18.089 SO libspdk_event_scheduler.so.4.0 00:16:18.089 LIB libspdk_event_vmd.a 00:16:18.089 SO libspdk_event_sock.so.5.0 00:16:18.089 SO libspdk_event_fsdev.so.1.0 00:16:18.089 SO libspdk_event_vhost_blk.so.3.0 00:16:18.089 SO libspdk_event_vmd.so.6.0 00:16:18.089 SO libspdk_event_iobuf.so.3.0 00:16:18.089 SYMLINK libspdk_event_keyring.so 00:16:18.089 SYMLINK libspdk_event_sock.so 00:16:18.089 SYMLINK libspdk_event_scheduler.so 00:16:18.089 SYMLINK libspdk_event_fsdev.so 00:16:18.089 SYMLINK libspdk_event_vhost_blk.so 00:16:18.089 SYMLINK libspdk_event_vmd.so 00:16:18.089 SYMLINK libspdk_event_iobuf.so 00:16:18.347 CC module/event/subsystems/accel/accel.o 00:16:18.606 LIB libspdk_event_accel.a 00:16:18.606 SO libspdk_event_accel.so.6.0 00:16:18.606 SYMLINK libspdk_event_accel.so 00:16:18.864 CC module/event/subsystems/bdev/bdev.o 00:16:18.864 LIB libspdk_event_bdev.a 00:16:19.121 SO libspdk_event_bdev.so.6.0 00:16:19.121 SYMLINK libspdk_event_bdev.so 00:16:19.121 CC module/event/subsystems/ublk/ublk.o 00:16:19.121 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:16:19.121 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:16:19.121 CC module/event/subsystems/nbd/nbd.o 00:16:19.121 CC module/event/subsystems/scsi/scsi.o 00:16:19.378 LIB libspdk_event_ublk.a 00:16:19.378 LIB libspdk_event_nbd.a 00:16:19.378 LIB libspdk_event_scsi.a 00:16:19.378 SO libspdk_event_nbd.so.6.0 00:16:19.378 SO libspdk_event_ublk.so.3.0 00:16:19.378 SO libspdk_event_scsi.so.6.0 00:16:19.378 SYMLINK libspdk_event_nbd.so 00:16:19.378 SYMLINK libspdk_event_ublk.so 00:16:19.378 SYMLINK libspdk_event_scsi.so 00:16:19.378 LIB libspdk_event_nvmf.a 00:16:19.378 SO libspdk_event_nvmf.so.6.0 00:16:19.635 SYMLINK libspdk_event_nvmf.so 00:16:19.636 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:16:19.636 CC module/event/subsystems/iscsi/iscsi.o 00:16:19.636 LIB libspdk_event_vhost_scsi.a 00:16:19.636 LIB libspdk_event_iscsi.a 
00:16:19.636 SO libspdk_event_vhost_scsi.so.3.0 00:16:19.636 SO libspdk_event_iscsi.so.6.0 00:16:19.636 SYMLINK libspdk_event_vhost_scsi.so 00:16:19.894 SYMLINK libspdk_event_iscsi.so 00:16:19.894 SO libspdk.so.6.0 00:16:19.894 SYMLINK libspdk.so 00:16:20.152 CXX app/trace/trace.o 00:16:20.152 CC app/spdk_lspci/spdk_lspci.o 00:16:20.152 CC app/trace_record/trace_record.o 00:16:20.152 CC examples/interrupt_tgt/interrupt_tgt.o 00:16:20.152 CC app/nvmf_tgt/nvmf_main.o 00:16:20.152 CC app/iscsi_tgt/iscsi_tgt.o 00:16:20.152 CC examples/util/zipf/zipf.o 00:16:20.152 CC test/thread/poller_perf/poller_perf.o 00:16:20.152 CC examples/ioat/perf/perf.o 00:16:20.152 CC app/spdk_tgt/spdk_tgt.o 00:16:20.152 LINK spdk_lspci 00:16:20.152 LINK interrupt_tgt 00:16:20.152 LINK nvmf_tgt 00:16:20.152 LINK spdk_trace_record 00:16:20.152 LINK poller_perf 00:16:20.152 LINK zipf 00:16:20.152 LINK iscsi_tgt 00:16:20.410 LINK ioat_perf 00:16:20.410 LINK spdk_tgt 00:16:20.410 CC app/spdk_nvme_perf/perf.o 00:16:20.410 LINK spdk_trace 00:16:20.410 CC app/spdk_nvme_identify/identify.o 00:16:20.410 CC app/spdk_nvme_discover/discovery_aer.o 00:16:20.410 TEST_HEADER include/spdk/accel.h 00:16:20.410 TEST_HEADER include/spdk/accel_module.h 00:16:20.410 TEST_HEADER include/spdk/assert.h 00:16:20.410 TEST_HEADER include/spdk/barrier.h 00:16:20.410 TEST_HEADER include/spdk/base64.h 00:16:20.410 TEST_HEADER include/spdk/bdev.h 00:16:20.410 TEST_HEADER include/spdk/bdev_module.h 00:16:20.410 TEST_HEADER include/spdk/bdev_zone.h 00:16:20.410 TEST_HEADER include/spdk/bit_array.h 00:16:20.410 TEST_HEADER include/spdk/bit_pool.h 00:16:20.410 TEST_HEADER include/spdk/blob_bdev.h 00:16:20.410 TEST_HEADER include/spdk/blobfs_bdev.h 00:16:20.410 TEST_HEADER include/spdk/blobfs.h 00:16:20.410 TEST_HEADER include/spdk/blob.h 00:16:20.410 TEST_HEADER include/spdk/conf.h 00:16:20.410 TEST_HEADER include/spdk/config.h 00:16:20.410 TEST_HEADER include/spdk/cpuset.h 00:16:20.410 TEST_HEADER include/spdk/crc16.h 
00:16:20.410 TEST_HEADER include/spdk/crc32.h 00:16:20.410 TEST_HEADER include/spdk/crc64.h 00:16:20.410 TEST_HEADER include/spdk/dif.h 00:16:20.410 TEST_HEADER include/spdk/dma.h 00:16:20.410 TEST_HEADER include/spdk/endian.h 00:16:20.410 TEST_HEADER include/spdk/env_dpdk.h 00:16:20.410 TEST_HEADER include/spdk/env.h 00:16:20.410 CC test/dma/test_dma/test_dma.o 00:16:20.410 TEST_HEADER include/spdk/event.h 00:16:20.410 TEST_HEADER include/spdk/fd_group.h 00:16:20.410 CC examples/ioat/verify/verify.o 00:16:20.410 TEST_HEADER include/spdk/fd.h 00:16:20.410 TEST_HEADER include/spdk/file.h 00:16:20.410 TEST_HEADER include/spdk/fsdev.h 00:16:20.410 TEST_HEADER include/spdk/fsdev_module.h 00:16:20.410 TEST_HEADER include/spdk/ftl.h 00:16:20.410 TEST_HEADER include/spdk/gpt_spec.h 00:16:20.410 TEST_HEADER include/spdk/hexlify.h 00:16:20.410 TEST_HEADER include/spdk/histogram_data.h 00:16:20.410 TEST_HEADER include/spdk/idxd.h 00:16:20.410 TEST_HEADER include/spdk/idxd_spec.h 00:16:20.668 TEST_HEADER include/spdk/init.h 00:16:20.668 TEST_HEADER include/spdk/ioat.h 00:16:20.668 TEST_HEADER include/spdk/ioat_spec.h 00:16:20.668 TEST_HEADER include/spdk/iscsi_spec.h 00:16:20.668 TEST_HEADER include/spdk/json.h 00:16:20.668 TEST_HEADER include/spdk/jsonrpc.h 00:16:20.668 TEST_HEADER include/spdk/keyring.h 00:16:20.668 TEST_HEADER include/spdk/keyring_module.h 00:16:20.668 TEST_HEADER include/spdk/likely.h 00:16:20.668 TEST_HEADER include/spdk/log.h 00:16:20.668 TEST_HEADER include/spdk/lvol.h 00:16:20.668 TEST_HEADER include/spdk/md5.h 00:16:20.668 TEST_HEADER include/spdk/memory.h 00:16:20.668 TEST_HEADER include/spdk/mmio.h 00:16:20.668 TEST_HEADER include/spdk/nbd.h 00:16:20.668 TEST_HEADER include/spdk/net.h 00:16:20.668 TEST_HEADER include/spdk/notify.h 00:16:20.668 CC test/app/bdev_svc/bdev_svc.o 00:16:20.668 TEST_HEADER include/spdk/nvme.h 00:16:20.668 TEST_HEADER include/spdk/nvme_intel.h 00:16:20.668 TEST_HEADER include/spdk/nvme_ocssd.h 00:16:20.668 TEST_HEADER 
include/spdk/nvme_ocssd_spec.h 00:16:20.668 TEST_HEADER include/spdk/nvme_spec.h 00:16:20.668 TEST_HEADER include/spdk/nvme_zns.h 00:16:20.668 TEST_HEADER include/spdk/nvmf_cmd.h 00:16:20.668 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:16:20.668 TEST_HEADER include/spdk/nvmf.h 00:16:20.668 TEST_HEADER include/spdk/nvmf_spec.h 00:16:20.668 TEST_HEADER include/spdk/nvmf_transport.h 00:16:20.668 TEST_HEADER include/spdk/opal.h 00:16:20.668 CC app/spdk_top/spdk_top.o 00:16:20.668 TEST_HEADER include/spdk/opal_spec.h 00:16:20.668 TEST_HEADER include/spdk/pci_ids.h 00:16:20.668 TEST_HEADER include/spdk/pipe.h 00:16:20.668 CC test/env/vtophys/vtophys.o 00:16:20.668 TEST_HEADER include/spdk/queue.h 00:16:20.668 TEST_HEADER include/spdk/reduce.h 00:16:20.668 TEST_HEADER include/spdk/rpc.h 00:16:20.668 TEST_HEADER include/spdk/scheduler.h 00:16:20.668 TEST_HEADER include/spdk/scsi.h 00:16:20.668 TEST_HEADER include/spdk/scsi_spec.h 00:16:20.668 TEST_HEADER include/spdk/sock.h 00:16:20.668 TEST_HEADER include/spdk/stdinc.h 00:16:20.668 TEST_HEADER include/spdk/string.h 00:16:20.668 TEST_HEADER include/spdk/thread.h 00:16:20.668 TEST_HEADER include/spdk/trace.h 00:16:20.668 TEST_HEADER include/spdk/trace_parser.h 00:16:20.668 TEST_HEADER include/spdk/tree.h 00:16:20.668 TEST_HEADER include/spdk/ublk.h 00:16:20.668 TEST_HEADER include/spdk/util.h 00:16:20.668 TEST_HEADER include/spdk/uuid.h 00:16:20.668 TEST_HEADER include/spdk/version.h 00:16:20.668 TEST_HEADER include/spdk/vfio_user_pci.h 00:16:20.668 TEST_HEADER include/spdk/vfio_user_spec.h 00:16:20.668 TEST_HEADER include/spdk/vhost.h 00:16:20.668 LINK spdk_nvme_discover 00:16:20.668 TEST_HEADER include/spdk/vmd.h 00:16:20.668 TEST_HEADER include/spdk/xor.h 00:16:20.668 TEST_HEADER include/spdk/zipf.h 00:16:20.668 CXX test/cpp_headers/accel.o 00:16:20.668 CC test/env/mem_callbacks/mem_callbacks.o 00:16:20.668 LINK verify 00:16:20.668 LINK vtophys 00:16:20.668 LINK bdev_svc 00:16:20.668 CXX test/cpp_headers/accel_module.o 
00:16:20.926 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:16:20.926 CXX test/cpp_headers/assert.o 00:16:20.926 LINK test_dma 00:16:20.926 CC examples/sock/hello_world/hello_sock.o 00:16:20.926 CC examples/thread/thread/thread_ex.o 00:16:20.926 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:16:20.926 LINK env_dpdk_post_init 00:16:20.926 CXX test/cpp_headers/barrier.o 00:16:21.183 CXX test/cpp_headers/base64.o 00:16:21.183 LINK hello_sock 00:16:21.183 CC test/env/memory/memory_ut.o 00:16:21.184 CC test/env/pci/pci_ut.o 00:16:21.184 LINK mem_callbacks 00:16:21.184 LINK spdk_nvme_perf 00:16:21.184 LINK thread 00:16:21.184 LINK spdk_nvme_identify 00:16:21.184 CXX test/cpp_headers/bdev.o 00:16:21.184 LINK nvme_fuzz 00:16:21.441 LINK spdk_top 00:16:21.441 CXX test/cpp_headers/bdev_module.o 00:16:21.441 CC examples/vmd/lsvmd/lsvmd.o 00:16:21.441 CC examples/vmd/led/led.o 00:16:21.441 CC test/app/histogram_perf/histogram_perf.o 00:16:21.441 CC examples/idxd/perf/perf.o 00:16:21.441 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:16:21.441 CC test/app/jsoncat/jsoncat.o 00:16:21.441 LINK lsvmd 00:16:21.441 CXX test/cpp_headers/bdev_zone.o 00:16:21.441 LINK led 00:16:21.698 LINK pci_ut 00:16:21.698 LINK histogram_perf 00:16:21.698 LINK jsoncat 00:16:21.698 CC app/vhost/vhost.o 00:16:21.698 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:16:21.698 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:16:21.698 CXX test/cpp_headers/bit_array.o 00:16:21.698 CC test/app/stub/stub.o 00:16:21.698 CXX test/cpp_headers/bit_pool.o 00:16:21.698 LINK vhost 00:16:21.698 LINK idxd_perf 00:16:21.956 CC examples/nvme/reconnect/reconnect.o 00:16:21.956 CC examples/nvme/hello_world/hello_world.o 00:16:21.956 CXX test/cpp_headers/blob_bdev.o 00:16:21.956 LINK stub 00:16:21.956 CC test/event/event_perf/event_perf.o 00:16:21.956 CC app/spdk_dd/spdk_dd.o 00:16:21.956 CC test/nvme/aer/aer.o 00:16:21.956 CXX test/cpp_headers/blobfs_bdev.o 00:16:22.213 CC test/rpc_client/rpc_client_test.o 00:16:22.213 LINK 
hello_world 00:16:22.213 LINK memory_ut 00:16:22.213 LINK vhost_fuzz 00:16:22.213 LINK event_perf 00:16:22.213 LINK reconnect 00:16:22.213 CXX test/cpp_headers/blobfs.o 00:16:22.213 LINK rpc_client_test 00:16:22.213 CC test/event/reactor/reactor.o 00:16:22.213 CC examples/nvme/nvme_manage/nvme_manage.o 00:16:22.213 CC examples/nvme/arbitration/arbitration.o 00:16:22.481 CC examples/nvme/hotplug/hotplug.o 00:16:22.481 LINK aer 00:16:22.481 CXX test/cpp_headers/blob.o 00:16:22.481 CC test/accel/dif/dif.o 00:16:22.481 LINK reactor 00:16:22.481 LINK spdk_dd 00:16:22.481 CXX test/cpp_headers/conf.o 00:16:22.481 CC app/fio/nvme/fio_plugin.o 00:16:22.481 CC test/nvme/reset/reset.o 00:16:22.481 CC test/event/reactor_perf/reactor_perf.o 00:16:22.739 CXX test/cpp_headers/config.o 00:16:22.740 LINK hotplug 00:16:22.740 CXX test/cpp_headers/cpuset.o 00:16:22.740 CC examples/nvme/cmb_copy/cmb_copy.o 00:16:22.740 LINK reactor_perf 00:16:22.740 LINK arbitration 00:16:22.740 CXX test/cpp_headers/crc16.o 00:16:22.740 LINK reset 00:16:22.740 LINK nvme_manage 00:16:22.740 LINK cmb_copy 00:16:22.740 CXX test/cpp_headers/crc32.o 00:16:22.740 LINK iscsi_fuzz 00:16:22.997 CC examples/nvme/abort/abort.o 00:16:22.997 CC test/event/app_repeat/app_repeat.o 00:16:22.997 CXX test/cpp_headers/crc64.o 00:16:22.997 CXX test/cpp_headers/dif.o 00:16:22.997 CC app/fio/bdev/fio_plugin.o 00:16:22.997 CC test/nvme/sgl/sgl.o 00:16:22.997 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:16:22.997 LINK app_repeat 00:16:22.997 CXX test/cpp_headers/dma.o 00:16:22.997 CC test/event/scheduler/scheduler.o 00:16:22.997 LINK spdk_nvme 00:16:22.997 LINK dif 00:16:23.255 CXX test/cpp_headers/endian.o 00:16:23.255 LINK pmr_persistence 00:16:23.255 CXX test/cpp_headers/env_dpdk.o 00:16:23.255 CXX test/cpp_headers/env.o 00:16:23.255 LINK abort 00:16:23.255 CXX test/cpp_headers/event.o 00:16:23.255 CXX test/cpp_headers/fd_group.o 00:16:23.255 LINK sgl 00:16:23.255 LINK scheduler 00:16:23.255 CXX 
test/cpp_headers/fd.o 00:16:23.255 CXX test/cpp_headers/file.o 00:16:23.255 CXX test/cpp_headers/fsdev.o 00:16:23.255 CC test/blobfs/mkfs/mkfs.o 00:16:23.512 CXX test/cpp_headers/fsdev_module.o 00:16:23.512 CXX test/cpp_headers/ftl.o 00:16:23.512 CXX test/cpp_headers/gpt_spec.o 00:16:23.512 CXX test/cpp_headers/hexlify.o 00:16:23.512 CC test/nvme/e2edp/nvme_dp.o 00:16:23.512 LINK spdk_bdev 00:16:23.512 CXX test/cpp_headers/histogram_data.o 00:16:23.513 CC examples/fsdev/hello_world/hello_fsdev.o 00:16:23.513 LINK mkfs 00:16:23.513 CC test/lvol/esnap/esnap.o 00:16:23.770 CXX test/cpp_headers/idxd.o 00:16:23.770 CC test/nvme/err_injection/err_injection.o 00:16:23.770 CC test/nvme/overhead/overhead.o 00:16:23.770 CC examples/accel/perf/accel_perf.o 00:16:23.770 CC test/bdev/bdevio/bdevio.o 00:16:23.770 LINK nvme_dp 00:16:23.770 CC examples/blob/hello_world/hello_blob.o 00:16:23.770 LINK hello_fsdev 00:16:23.770 CC test/nvme/startup/startup.o 00:16:23.770 CXX test/cpp_headers/idxd_spec.o 00:16:23.770 LINK err_injection 00:16:23.770 CC test/nvme/reserve/reserve.o 00:16:23.770 LINK overhead 00:16:24.027 LINK hello_blob 00:16:24.027 LINK startup 00:16:24.027 CXX test/cpp_headers/init.o 00:16:24.027 CC test/nvme/simple_copy/simple_copy.o 00:16:24.027 CC examples/blob/cli/blobcli.o 00:16:24.027 LINK reserve 00:16:24.027 CXX test/cpp_headers/ioat.o 00:16:24.027 CC test/nvme/connect_stress/connect_stress.o 00:16:24.027 CC test/nvme/boot_partition/boot_partition.o 00:16:24.027 LINK bdevio 00:16:24.027 CC test/nvme/compliance/nvme_compliance.o 00:16:24.284 CXX test/cpp_headers/ioat_spec.o 00:16:24.284 LINK simple_copy 00:16:24.284 LINK connect_stress 00:16:24.284 LINK boot_partition 00:16:24.284 CC test/nvme/fused_ordering/fused_ordering.o 00:16:24.284 LINK accel_perf 00:16:24.284 CXX test/cpp_headers/iscsi_spec.o 00:16:24.284 CXX test/cpp_headers/json.o 00:16:24.284 CC test/nvme/doorbell_aers/doorbell_aers.o 00:16:24.284 CXX test/cpp_headers/jsonrpc.o 00:16:24.284 CXX 
test/cpp_headers/keyring.o 00:16:24.284 CC test/nvme/fdp/fdp.o 00:16:24.285 LINK blobcli 00:16:24.285 LINK fused_ordering 00:16:24.542 CC test/nvme/cuse/cuse.o 00:16:24.542 LINK nvme_compliance 00:16:24.542 CXX test/cpp_headers/keyring_module.o 00:16:24.542 LINK doorbell_aers 00:16:24.542 CXX test/cpp_headers/likely.o 00:16:24.542 CXX test/cpp_headers/log.o 00:16:24.542 CXX test/cpp_headers/lvol.o 00:16:24.542 CXX test/cpp_headers/md5.o 00:16:24.542 CXX test/cpp_headers/memory.o 00:16:24.542 LINK fdp 00:16:24.542 CXX test/cpp_headers/mmio.o 00:16:24.542 CXX test/cpp_headers/nbd.o 00:16:24.542 CC examples/bdev/hello_world/hello_bdev.o 00:16:24.542 CXX test/cpp_headers/net.o 00:16:24.799 CXX test/cpp_headers/notify.o 00:16:24.799 CC examples/bdev/bdevperf/bdevperf.o 00:16:24.799 CXX test/cpp_headers/nvme.o 00:16:24.799 CXX test/cpp_headers/nvme_intel.o 00:16:24.799 CXX test/cpp_headers/nvme_ocssd.o 00:16:24.799 CXX test/cpp_headers/nvme_ocssd_spec.o 00:16:24.799 CXX test/cpp_headers/nvme_spec.o 00:16:24.799 CXX test/cpp_headers/nvme_zns.o 00:16:24.799 CXX test/cpp_headers/nvmf_cmd.o 00:16:24.799 CXX test/cpp_headers/nvmf_fc_spec.o 00:16:24.799 LINK hello_bdev 00:16:25.056 CXX test/cpp_headers/nvmf.o 00:16:25.056 CXX test/cpp_headers/nvmf_spec.o 00:16:25.056 CXX test/cpp_headers/nvmf_transport.o 00:16:25.056 CXX test/cpp_headers/opal.o 00:16:25.056 CXX test/cpp_headers/opal_spec.o 00:16:25.056 CXX test/cpp_headers/pci_ids.o 00:16:25.056 CXX test/cpp_headers/pipe.o 00:16:25.056 CXX test/cpp_headers/queue.o 00:16:25.056 CXX test/cpp_headers/reduce.o 00:16:25.056 CXX test/cpp_headers/rpc.o 00:16:25.056 CXX test/cpp_headers/scheduler.o 00:16:25.056 CXX test/cpp_headers/scsi.o 00:16:25.056 CXX test/cpp_headers/scsi_spec.o 00:16:25.056 CXX test/cpp_headers/sock.o 00:16:25.315 CXX test/cpp_headers/stdinc.o 00:16:25.315 CXX test/cpp_headers/string.o 00:16:25.315 CXX test/cpp_headers/thread.o 00:16:25.315 CXX test/cpp_headers/trace.o 00:16:25.315 CXX 
test/cpp_headers/trace_parser.o 00:16:25.315 CXX test/cpp_headers/tree.o 00:16:25.315 CXX test/cpp_headers/ublk.o 00:16:25.315 CXX test/cpp_headers/util.o 00:16:25.315 LINK bdevperf 00:16:25.315 CXX test/cpp_headers/uuid.o 00:16:25.315 CXX test/cpp_headers/version.o 00:16:25.315 CXX test/cpp_headers/vfio_user_pci.o 00:16:25.315 CXX test/cpp_headers/vfio_user_spec.o 00:16:25.315 CXX test/cpp_headers/vhost.o 00:16:25.315 CXX test/cpp_headers/vmd.o 00:16:25.315 CXX test/cpp_headers/xor.o 00:16:25.315 CXX test/cpp_headers/zipf.o 00:16:25.575 LINK cuse 00:16:25.833 CC examples/nvmf/nvmf/nvmf.o 00:16:26.091 LINK nvmf 00:16:28.617 LINK esnap 00:16:28.875 00:16:28.875 real 1m9.687s 00:16:28.875 user 6m19.104s 00:16:28.875 sys 1m6.563s 00:16:28.875 22:58:03 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:16:28.875 22:58:03 make -- common/autotest_common.sh@10 -- $ set +x 00:16:28.875 ************************************ 00:16:28.875 END TEST make 00:16:28.875 ************************************ 00:16:28.875 22:58:04 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:16:28.875 22:58:04 -- pm/common@29 -- $ signal_monitor_resources TERM 00:16:28.875 22:58:04 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:16:28.875 22:58:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:16:28.875 22:58:04 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:16:28.875 22:58:04 -- pm/common@44 -- $ pid=5031 00:16:28.875 22:58:04 -- pm/common@50 -- $ kill -TERM 5031 00:16:28.875 22:58:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:16:28.875 22:58:04 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:16:28.875 22:58:04 -- pm/common@44 -- $ pid=5032 00:16:28.875 22:58:04 -- pm/common@50 -- $ kill -TERM 5032 00:16:28.875 22:58:04 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:16:28.875 22:58:04 
-- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:16:28.875 22:58:04 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:28.875 22:58:04 -- common/autotest_common.sh@1711 -- # lcov --version 00:16:28.875 22:58:04 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:28.875 22:58:04 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:28.875 22:58:04 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:28.875 22:58:04 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:28.875 22:58:04 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:28.875 22:58:04 -- scripts/common.sh@336 -- # IFS=.-: 00:16:28.875 22:58:04 -- scripts/common.sh@336 -- # read -ra ver1 00:16:28.875 22:58:04 -- scripts/common.sh@337 -- # IFS=.-: 00:16:28.875 22:58:04 -- scripts/common.sh@337 -- # read -ra ver2 00:16:28.875 22:58:04 -- scripts/common.sh@338 -- # local 'op=<' 00:16:28.875 22:58:04 -- scripts/common.sh@340 -- # ver1_l=2 00:16:28.875 22:58:04 -- scripts/common.sh@341 -- # ver2_l=1 00:16:28.875 22:58:04 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:28.875 22:58:04 -- scripts/common.sh@344 -- # case "$op" in 00:16:28.875 22:58:04 -- scripts/common.sh@345 -- # : 1 00:16:28.875 22:58:04 -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:28.875 22:58:04 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:28.875 22:58:04 -- scripts/common.sh@365 -- # decimal 1 00:16:28.875 22:58:04 -- scripts/common.sh@353 -- # local d=1 00:16:28.875 22:58:04 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:28.875 22:58:04 -- scripts/common.sh@355 -- # echo 1 00:16:28.875 22:58:04 -- scripts/common.sh@365 -- # ver1[v]=1 00:16:28.875 22:58:04 -- scripts/common.sh@366 -- # decimal 2 00:16:28.875 22:58:04 -- scripts/common.sh@353 -- # local d=2 00:16:28.875 22:58:04 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:28.875 22:58:04 -- scripts/common.sh@355 -- # echo 2 00:16:28.875 22:58:04 -- scripts/common.sh@366 -- # ver2[v]=2 00:16:28.875 22:58:04 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:28.875 22:58:04 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:28.875 22:58:04 -- scripts/common.sh@368 -- # return 0 00:16:28.875 22:58:04 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:28.875 22:58:04 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:28.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.875 --rc genhtml_branch_coverage=1 00:16:28.875 --rc genhtml_function_coverage=1 00:16:28.875 --rc genhtml_legend=1 00:16:28.875 --rc geninfo_all_blocks=1 00:16:28.875 --rc geninfo_unexecuted_blocks=1 00:16:28.875 00:16:28.875 ' 00:16:28.876 22:58:04 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:28.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.876 --rc genhtml_branch_coverage=1 00:16:28.876 --rc genhtml_function_coverage=1 00:16:28.876 --rc genhtml_legend=1 00:16:28.876 --rc geninfo_all_blocks=1 00:16:28.876 --rc geninfo_unexecuted_blocks=1 00:16:28.876 00:16:28.876 ' 00:16:28.876 22:58:04 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:28.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.876 --rc genhtml_branch_coverage=1 00:16:28.876 --rc 
genhtml_function_coverage=1 00:16:28.876 --rc genhtml_legend=1 00:16:28.876 --rc geninfo_all_blocks=1 00:16:28.876 --rc geninfo_unexecuted_blocks=1 00:16:28.876 00:16:28.876 ' 00:16:28.876 22:58:04 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:28.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.876 --rc genhtml_branch_coverage=1 00:16:28.876 --rc genhtml_function_coverage=1 00:16:28.876 --rc genhtml_legend=1 00:16:28.876 --rc geninfo_all_blocks=1 00:16:28.876 --rc geninfo_unexecuted_blocks=1 00:16:28.876 00:16:28.876 ' 00:16:28.876 22:58:04 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:28.876 22:58:04 -- nvmf/common.sh@7 -- # uname -s 00:16:28.876 22:58:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:28.876 22:58:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:28.876 22:58:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:28.876 22:58:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:28.876 22:58:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:28.876 22:58:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:28.876 22:58:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:28.876 22:58:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:28.876 22:58:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:28.876 22:58:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:28.876 22:58:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f3f4faf8-991c-49df-aa98-6b75bac91fa9 00:16:28.876 22:58:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=f3f4faf8-991c-49df-aa98-6b75bac91fa9 00:16:28.876 22:58:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:28.876 22:58:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:28.876 22:58:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:16:28.876 22:58:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:16:28.876 22:58:04 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:28.876 22:58:04 -- scripts/common.sh@15 -- # shopt -s extglob 00:16:28.876 22:58:04 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:28.876 22:58:04 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:28.876 22:58:04 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:28.876 22:58:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.876 22:58:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.876 22:58:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.876 22:58:04 -- paths/export.sh@5 -- # export PATH 00:16:28.876 22:58:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.876 22:58:04 -- nvmf/common.sh@51 -- # : 0 00:16:28.876 22:58:04 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:28.876 22:58:04 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:28.876 22:58:04 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:16:28.876 22:58:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:28.876 22:58:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:28.876 22:58:04 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:28.876 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:28.876 22:58:04 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:28.876 22:58:04 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:28.876 22:58:04 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:28.876 22:58:04 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:16:28.876 22:58:04 -- spdk/autotest.sh@32 -- # uname -s 00:16:28.876 22:58:04 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:16:28.876 22:58:04 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:16:28.876 22:58:04 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:16:28.876 22:58:04 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:16:28.876 22:58:04 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:16:28.876 22:58:04 -- spdk/autotest.sh@44 -- # modprobe nbd 00:16:28.876 22:58:04 -- spdk/autotest.sh@46 -- # type -P udevadm 00:16:28.876 22:58:04 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:16:28.876 22:58:04 -- spdk/autotest.sh@48 -- # udevadm_pid=53736 00:16:28.876 22:58:04 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:16:28.876 22:58:04 -- pm/common@17 -- # local monitor 00:16:28.876 22:58:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:16:28.876 22:58:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:16:28.876 22:58:04 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:16:28.876 22:58:04 -- pm/common@25 -- # sleep 1 00:16:28.876 22:58:04 -- pm/common@21 -- # date +%s 00:16:28.876 22:58:04 -- 
pm/common@21 -- # date +%s 00:16:28.876 22:58:04 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733785084 00:16:28.876 22:58:04 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733785084 00:16:28.876 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733785084_collect-cpu-load.pm.log 00:16:29.134 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733785084_collect-vmstat.pm.log 00:16:30.068 22:58:05 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:16:30.068 22:58:05 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:16:30.068 22:58:05 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:30.068 22:58:05 -- common/autotest_common.sh@10 -- # set +x 00:16:30.068 22:58:05 -- spdk/autotest.sh@59 -- # create_test_list 00:16:30.068 22:58:05 -- common/autotest_common.sh@752 -- # xtrace_disable 00:16:30.068 22:58:05 -- common/autotest_common.sh@10 -- # set +x 00:16:30.068 22:58:05 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:16:30.068 22:58:05 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:16:30.068 22:58:05 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:16:30.068 22:58:05 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:16:30.068 22:58:05 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:16:30.068 22:58:05 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:16:30.068 22:58:05 -- common/autotest_common.sh@1457 -- # uname 00:16:30.068 22:58:05 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:16:30.068 22:58:05 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:16:30.068 22:58:05 -- common/autotest_common.sh@1477 -- 
# uname 00:16:30.068 22:58:05 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:16:30.068 22:58:05 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:16:30.068 22:58:05 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:16:30.068 lcov: LCOV version 1.15 00:16:30.068 22:58:05 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:16:44.930 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:16:44.930 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:16:59.824 22:58:33 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:16:59.824 22:58:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:59.824 22:58:33 -- common/autotest_common.sh@10 -- # set +x 00:16:59.824 22:58:33 -- spdk/autotest.sh@78 -- # rm -f 00:16:59.824 22:58:33 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:59.824 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:59.824 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:16:59.824 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:16:59.824 22:58:34 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:16:59.824 22:58:34 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:16:59.824 22:58:34 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:16:59.824 22:58:34 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:16:59.824 
22:58:34 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:16:59.824 22:58:34 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:16:59.824 22:58:34 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:16:59.824 22:58:34 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:16:59.824 22:58:34 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:16:59.824 22:58:34 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:16:59.824 22:58:34 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:16:59.824 22:58:34 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:59.824 22:58:34 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:59.824 22:58:34 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:16:59.824 22:58:34 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:16:59.824 22:58:34 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:16:59.824 22:58:34 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:16:59.824 22:58:34 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:16:59.824 22:58:34 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:59.824 22:58:34 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:59.824 22:58:34 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:16:59.824 22:58:34 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:16:59.824 22:58:34 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:16:59.824 22:58:34 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:16:59.824 22:58:34 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:59.824 22:58:34 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:16:59.824 22:58:34 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:16:59.824 22:58:34 -- 
common/autotest_common.sh@1650 -- # local device=nvme1n3 00:16:59.824 22:58:34 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:16:59.824 22:58:34 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:59.824 22:58:34 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:16:59.824 22:58:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:16:59.824 22:58:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:16:59.824 22:58:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:16:59.824 22:58:34 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:16:59.824 22:58:34 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:16:59.824 No valid GPT data, bailing 00:16:59.824 22:58:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:59.824 22:58:34 -- scripts/common.sh@394 -- # pt= 00:16:59.824 22:58:34 -- scripts/common.sh@395 -- # return 1 00:16:59.824 22:58:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:16:59.824 1+0 records in 00:16:59.824 1+0 records out 00:16:59.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00437669 s, 240 MB/s 00:16:59.824 22:58:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:16:59.824 22:58:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:16:59.824 22:58:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:16:59.824 22:58:34 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:16:59.824 22:58:34 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:16:59.824 No valid GPT data, bailing 00:16:59.824 22:58:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:59.824 22:58:34 -- scripts/common.sh@394 -- # pt= 00:16:59.824 22:58:34 -- scripts/common.sh@395 -- # return 1 00:16:59.824 22:58:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:16:59.824 1+0 records in 00:16:59.824 1+0 records 
out 00:16:59.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0043669 s, 240 MB/s 00:16:59.824 22:58:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:16:59.824 22:58:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:16:59.824 22:58:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:16:59.824 22:58:34 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:16:59.824 22:58:34 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:16:59.824 No valid GPT data, bailing 00:16:59.824 22:58:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:16:59.824 22:58:34 -- scripts/common.sh@394 -- # pt= 00:16:59.824 22:58:34 -- scripts/common.sh@395 -- # return 1 00:16:59.824 22:58:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:16:59.824 1+0 records in 00:16:59.824 1+0 records out 00:16:59.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00281114 s, 373 MB/s 00:16:59.824 22:58:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:16:59.825 22:58:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:16:59.825 22:58:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:16:59.825 22:58:34 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:16:59.825 22:58:34 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:16:59.825 No valid GPT data, bailing 00:16:59.825 22:58:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:16:59.825 22:58:34 -- scripts/common.sh@394 -- # pt= 00:16:59.825 22:58:34 -- scripts/common.sh@395 -- # return 1 00:16:59.825 22:58:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:16:59.825 1+0 records in 00:16:59.825 1+0 records out 00:16:59.825 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00449045 s, 234 MB/s 00:16:59.825 22:58:34 -- spdk/autotest.sh@105 -- # sync 00:16:59.825 22:58:34 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd 
reap_spdk_processes 00:16:59.825 22:58:34 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:16:59.825 22:58:34 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:17:00.761 22:58:36 -- spdk/autotest.sh@111 -- # uname -s 00:17:00.761 22:58:36 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:17:00.761 22:58:36 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:17:00.761 22:58:36 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:17:01.333 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:01.333 Hugepages 00:17:01.333 node hugesize free / total 00:17:01.333 node0 1048576kB 0 / 0 00:17:01.333 node0 2048kB 0 / 0 00:17:01.333 00:17:01.333 Type BDF Vendor Device NUMA Driver Device Block devices 00:17:01.333 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:17:01.333 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:17:01.594 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:17:01.594 22:58:36 -- spdk/autotest.sh@117 -- # uname -s 00:17:01.594 22:58:36 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:17:01.594 22:58:36 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:17:01.594 22:58:36 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:01.856 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:02.115 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:02.115 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:02.115 22:58:37 -- common/autotest_common.sh@1517 -- # sleep 1 00:17:03.049 22:58:38 -- common/autotest_common.sh@1518 -- # bdfs=() 00:17:03.049 22:58:38 -- common/autotest_common.sh@1518 -- # local bdfs 00:17:03.049 22:58:38 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:17:03.049 22:58:38 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 
00:17:03.049 22:58:38 -- common/autotest_common.sh@1498 -- # bdfs=() 00:17:03.049 22:58:38 -- common/autotest_common.sh@1498 -- # local bdfs 00:17:03.049 22:58:38 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:17:03.049 22:58:38 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:03.049 22:58:38 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:17:03.308 22:58:38 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:17:03.308 22:58:38 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:17:03.308 22:58:38 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:03.566 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:03.566 Waiting for block devices as requested 00:17:03.566 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:03.566 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:03.566 22:58:38 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:17:03.566 22:58:38 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:17:03.566 22:58:38 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:17:03.566 22:58:38 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:17:03.566 22:58:38 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:17:03.566 22:58:38 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:17:03.566 22:58:38 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:17:03.566 22:58:38 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:17:03.566 22:58:38 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:17:03.566 
22:58:38 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:17:03.566 22:58:38 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:17:03.566 22:58:38 -- common/autotest_common.sh@1531 -- # grep oacs 00:17:03.566 22:58:38 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:17:03.566 22:58:38 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:17:03.566 22:58:38 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:17:03.566 22:58:38 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:17:03.566 22:58:38 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:17:03.566 22:58:38 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:17:03.566 22:58:38 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:17:03.824 22:58:38 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:17:03.824 22:58:38 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:17:03.824 22:58:38 -- common/autotest_common.sh@1543 -- # continue 00:17:03.824 22:58:38 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:17:03.824 22:58:38 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:17:03.824 22:58:38 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:17:03.824 22:58:38 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:17:03.824 22:58:38 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:17:03.824 22:58:38 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:17:03.824 22:58:38 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:17:03.824 22:58:38 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:17:03.824 22:58:38 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:17:03.824 22:58:38 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:17:03.824 22:58:38 -- 
common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:17:03.824 22:58:38 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:17:03.824 22:58:38 -- common/autotest_common.sh@1531 -- # grep oacs 00:17:03.824 22:58:38 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:17:03.824 22:58:38 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:17:03.824 22:58:38 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:17:03.824 22:58:38 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:17:03.824 22:58:38 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:17:03.824 22:58:38 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:17:03.824 22:58:38 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:17:03.824 22:58:38 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:17:03.824 22:58:38 -- common/autotest_common.sh@1543 -- # continue 00:17:03.824 22:58:38 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:17:03.824 22:58:38 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:03.824 22:58:38 -- common/autotest_common.sh@10 -- # set +x 00:17:03.824 22:58:38 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:17:03.824 22:58:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:03.824 22:58:38 -- common/autotest_common.sh@10 -- # set +x 00:17:03.824 22:58:38 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:04.389 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:04.389 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:04.389 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:04.389 22:58:39 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:17:04.389 22:58:39 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:04.389 22:58:39 -- common/autotest_common.sh@10 -- # set +x 00:17:04.389 22:58:39 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:17:04.389 22:58:39 -- 
common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:17:04.389 22:58:39 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:17:04.389 22:58:39 -- common/autotest_common.sh@1563 -- # bdfs=() 00:17:04.389 22:58:39 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:17:04.389 22:58:39 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:17:04.389 22:58:39 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:17:04.389 22:58:39 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:17:04.389 22:58:39 -- common/autotest_common.sh@1498 -- # bdfs=() 00:17:04.389 22:58:39 -- common/autotest_common.sh@1498 -- # local bdfs 00:17:04.389 22:58:39 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:17:04.389 22:58:39 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:17:04.389 22:58:39 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:04.389 22:58:39 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:17:04.389 22:58:39 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:17:04.389 22:58:39 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:17:04.389 22:58:39 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:17:04.389 22:58:39 -- common/autotest_common.sh@1566 -- # device=0x0010 00:17:04.389 22:58:39 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:17:04.389 22:58:39 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:17:04.389 22:58:39 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:17:04.389 22:58:39 -- common/autotest_common.sh@1566 -- # device=0x0010 00:17:04.389 22:58:39 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:17:04.389 22:58:39 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:17:04.389 22:58:39 -- 
common/autotest_common.sh@1572 -- # return 0 00:17:04.389 22:58:39 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:17:04.389 22:58:39 -- common/autotest_common.sh@1580 -- # return 0 00:17:04.389 22:58:39 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:17:04.389 22:58:39 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:17:04.389 22:58:39 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:17:04.389 22:58:39 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:17:04.389 22:58:39 -- spdk/autotest.sh@149 -- # timing_enter lib 00:17:04.389 22:58:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:04.389 22:58:39 -- common/autotest_common.sh@10 -- # set +x 00:17:04.389 22:58:39 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:17:04.389 22:58:39 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:17:04.389 22:58:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:04.389 22:58:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:04.389 22:58:39 -- common/autotest_common.sh@10 -- # set +x 00:17:04.389 ************************************ 00:17:04.389 START TEST env 00:17:04.389 ************************************ 00:17:04.389 22:58:39 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:17:04.647 * Looking for test storage... 
00:17:04.647 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:17:04.647 22:58:39 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:04.647 22:58:39 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:04.647 22:58:39 env -- common/autotest_common.sh@1711 -- # lcov --version 00:17:04.647 22:58:39 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:04.647 22:58:39 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:04.647 22:58:39 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:04.647 22:58:39 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:04.647 22:58:39 env -- scripts/common.sh@336 -- # IFS=.-: 00:17:04.647 22:58:39 env -- scripts/common.sh@336 -- # read -ra ver1 00:17:04.647 22:58:39 env -- scripts/common.sh@337 -- # IFS=.-: 00:17:04.647 22:58:39 env -- scripts/common.sh@337 -- # read -ra ver2 00:17:04.647 22:58:39 env -- scripts/common.sh@338 -- # local 'op=<' 00:17:04.647 22:58:39 env -- scripts/common.sh@340 -- # ver1_l=2 00:17:04.647 22:58:39 env -- scripts/common.sh@341 -- # ver2_l=1 00:17:04.647 22:58:39 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:04.647 22:58:39 env -- scripts/common.sh@344 -- # case "$op" in 00:17:04.647 22:58:39 env -- scripts/common.sh@345 -- # : 1 00:17:04.647 22:58:39 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:04.647 22:58:39 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:04.647 22:58:39 env -- scripts/common.sh@365 -- # decimal 1 00:17:04.647 22:58:39 env -- scripts/common.sh@353 -- # local d=1 00:17:04.647 22:58:39 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:04.647 22:58:39 env -- scripts/common.sh@355 -- # echo 1 00:17:04.647 22:58:39 env -- scripts/common.sh@365 -- # ver1[v]=1 00:17:04.647 22:58:39 env -- scripts/common.sh@366 -- # decimal 2 00:17:04.648 22:58:39 env -- scripts/common.sh@353 -- # local d=2 00:17:04.648 22:58:39 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:04.648 22:58:39 env -- scripts/common.sh@355 -- # echo 2 00:17:04.648 22:58:39 env -- scripts/common.sh@366 -- # ver2[v]=2 00:17:04.648 22:58:39 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:04.648 22:58:39 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:04.648 22:58:39 env -- scripts/common.sh@368 -- # return 0 00:17:04.648 22:58:39 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:04.648 22:58:39 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:04.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.648 --rc genhtml_branch_coverage=1 00:17:04.648 --rc genhtml_function_coverage=1 00:17:04.648 --rc genhtml_legend=1 00:17:04.648 --rc geninfo_all_blocks=1 00:17:04.648 --rc geninfo_unexecuted_blocks=1 00:17:04.648 00:17:04.648 ' 00:17:04.648 22:58:39 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:04.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.648 --rc genhtml_branch_coverage=1 00:17:04.648 --rc genhtml_function_coverage=1 00:17:04.648 --rc genhtml_legend=1 00:17:04.648 --rc geninfo_all_blocks=1 00:17:04.648 --rc geninfo_unexecuted_blocks=1 00:17:04.648 00:17:04.648 ' 00:17:04.648 22:58:39 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:04.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:17:04.648 --rc genhtml_branch_coverage=1 00:17:04.648 --rc genhtml_function_coverage=1 00:17:04.648 --rc genhtml_legend=1 00:17:04.648 --rc geninfo_all_blocks=1 00:17:04.648 --rc geninfo_unexecuted_blocks=1 00:17:04.648 00:17:04.648 ' 00:17:04.648 22:58:39 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:04.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.648 --rc genhtml_branch_coverage=1 00:17:04.648 --rc genhtml_function_coverage=1 00:17:04.648 --rc genhtml_legend=1 00:17:04.648 --rc geninfo_all_blocks=1 00:17:04.648 --rc geninfo_unexecuted_blocks=1 00:17:04.648 00:17:04.648 ' 00:17:04.648 22:58:39 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:17:04.648 22:58:39 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:04.648 22:58:39 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:04.648 22:58:39 env -- common/autotest_common.sh@10 -- # set +x 00:17:04.648 ************************************ 00:17:04.648 START TEST env_memory 00:17:04.648 ************************************ 00:17:04.648 22:58:39 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:17:04.648 00:17:04.648 00:17:04.648 CUnit - A unit testing framework for C - Version 2.1-3 00:17:04.648 http://cunit.sourceforge.net/ 00:17:04.648 00:17:04.648 00:17:04.648 Suite: memory 00:17:04.648 Test: alloc and free memory map ...[2024-12-09 22:58:39.940405] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:17:04.648 passed 00:17:04.648 Test: mem map translation ...[2024-12-09 22:58:39.979215] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:17:04.648 [2024-12-09 22:58:39.979275] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:17:04.648 [2024-12-09 22:58:39.979335] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:17:04.648 [2024-12-09 22:58:39.979350] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:17:04.905 passed 00:17:04.905 Test: mem map registration ...[2024-12-09 22:58:40.048397] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:17:04.905 [2024-12-09 22:58:40.048467] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:17:04.905 passed 00:17:04.905 Test: mem map adjacent registrations ...passed 00:17:04.905 00:17:04.905 Run Summary: Type Total Ran Passed Failed Inactive 00:17:04.905 suites 1 1 n/a 0 0 00:17:04.905 tests 4 4 4 0 0 00:17:04.905 asserts 152 152 152 0 n/a 00:17:04.905 00:17:04.905 Elapsed time = 0.235 seconds 00:17:04.905 00:17:04.905 real 0m0.264s 00:17:04.905 user 0m0.241s 00:17:04.905 sys 0m0.017s 00:17:04.905 22:58:40 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:04.905 22:58:40 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:17:04.905 ************************************ 00:17:04.905 END TEST env_memory 00:17:04.905 ************************************ 00:17:04.905 22:58:40 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:17:04.905 22:58:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:04.905 22:58:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:04.905 22:58:40 env -- common/autotest_common.sh@10 -- # set +x 00:17:04.905 
************************************ 00:17:04.905 START TEST env_vtophys 00:17:04.905 ************************************ 00:17:04.905 22:58:40 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:17:04.905 EAL: lib.eal log level changed from notice to debug 00:17:04.905 EAL: Detected lcore 0 as core 0 on socket 0 00:17:04.905 EAL: Detected lcore 1 as core 0 on socket 0 00:17:04.905 EAL: Detected lcore 2 as core 0 on socket 0 00:17:04.905 EAL: Detected lcore 3 as core 0 on socket 0 00:17:04.905 EAL: Detected lcore 4 as core 0 on socket 0 00:17:04.905 EAL: Detected lcore 5 as core 0 on socket 0 00:17:04.905 EAL: Detected lcore 6 as core 0 on socket 0 00:17:04.905 EAL: Detected lcore 7 as core 0 on socket 0 00:17:04.905 EAL: Detected lcore 8 as core 0 on socket 0 00:17:04.905 EAL: Detected lcore 9 as core 0 on socket 0 00:17:04.905 EAL: Maximum logical cores by configuration: 128 00:17:04.905 EAL: Detected CPU lcores: 10 00:17:04.905 EAL: Detected NUMA nodes: 1 00:17:04.905 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:17:04.905 EAL: Detected shared linkage of DPDK 00:17:04.905 EAL: No shared files mode enabled, IPC will be disabled 00:17:04.905 EAL: Selected IOVA mode 'PA' 00:17:04.905 EAL: Probing VFIO support... 00:17:04.905 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:17:04.905 EAL: VFIO modules not loaded, skipping VFIO support... 00:17:04.905 EAL: Ask a virtual area of 0x2e000 bytes 00:17:04.905 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:17:04.905 EAL: Setting up physically contiguous memory... 
00:17:04.905 EAL: Setting maximum number of open files to 524288 00:17:04.905 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:17:04.905 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:17:04.905 EAL: Ask a virtual area of 0x61000 bytes 00:17:04.905 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:17:04.905 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:04.905 EAL: Ask a virtual area of 0x400000000 bytes 00:17:04.905 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:17:04.905 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:17:04.905 EAL: Ask a virtual area of 0x61000 bytes 00:17:04.905 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:17:04.905 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:04.905 EAL: Ask a virtual area of 0x400000000 bytes 00:17:04.905 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:17:04.905 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:17:04.905 EAL: Ask a virtual area of 0x61000 bytes 00:17:04.905 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:17:04.905 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:04.905 EAL: Ask a virtual area of 0x400000000 bytes 00:17:04.905 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:17:04.905 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:17:04.905 EAL: Ask a virtual area of 0x61000 bytes 00:17:04.905 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:17:04.905 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:04.905 EAL: Ask a virtual area of 0x400000000 bytes 00:17:04.905 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:17:04.905 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:17:04.905 EAL: Hugepages will be freed exactly as allocated. 
00:17:04.905 EAL: No shared files mode enabled, IPC is disabled 00:17:04.905 EAL: No shared files mode enabled, IPC is disabled 00:17:05.163 EAL: TSC frequency is ~2600000 KHz 00:17:05.163 EAL: Main lcore 0 is ready (tid=7f97856e3a40;cpuset=[0]) 00:17:05.163 EAL: Trying to obtain current memory policy. 00:17:05.163 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:05.163 EAL: Restoring previous memory policy: 0 00:17:05.163 EAL: request: mp_malloc_sync 00:17:05.163 EAL: No shared files mode enabled, IPC is disabled 00:17:05.163 EAL: Heap on socket 0 was expanded by 2MB 00:17:05.163 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:17:05.163 EAL: No PCI address specified using 'addr=' in: bus=pci 00:17:05.163 EAL: Mem event callback 'spdk:(nil)' registered 00:17:05.163 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:17:05.163 00:17:05.163 00:17:05.163 CUnit - A unit testing framework for C - Version 2.1-3 00:17:05.163 http://cunit.sourceforge.net/ 00:17:05.163 00:17:05.163 00:17:05.163 Suite: components_suite 00:17:05.421 Test: vtophys_malloc_test ...passed 00:17:05.421 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:17:05.421 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:05.421 EAL: Restoring previous memory policy: 4 00:17:05.421 EAL: Calling mem event callback 'spdk:(nil)' 00:17:05.421 EAL: request: mp_malloc_sync 00:17:05.421 EAL: No shared files mode enabled, IPC is disabled 00:17:05.421 EAL: Heap on socket 0 was expanded by 4MB 00:17:05.421 EAL: Calling mem event callback 'spdk:(nil)' 00:17:05.421 EAL: request: mp_malloc_sync 00:17:05.421 EAL: No shared files mode enabled, IPC is disabled 00:17:05.421 EAL: Heap on socket 0 was shrunk by 4MB 00:17:05.421 EAL: Trying to obtain current memory policy. 
00:17:05.421 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:05.421 EAL: Restoring previous memory policy: 4 00:17:05.421 EAL: Calling mem event callback 'spdk:(nil)' 00:17:05.421 EAL: request: mp_malloc_sync 00:17:05.421 EAL: No shared files mode enabled, IPC is disabled 00:17:05.421 EAL: Heap on socket 0 was expanded by 6MB 00:17:05.421 EAL: Calling mem event callback 'spdk:(nil)' 00:17:05.421 EAL: request: mp_malloc_sync 00:17:05.421 EAL: No shared files mode enabled, IPC is disabled 00:17:05.421 EAL: Heap on socket 0 was shrunk by 6MB 00:17:05.421 EAL: Trying to obtain current memory policy. 00:17:05.421 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:05.421 EAL: Restoring previous memory policy: 4 00:17:05.421 EAL: Calling mem event callback 'spdk:(nil)' 00:17:05.421 EAL: request: mp_malloc_sync 00:17:05.421 EAL: No shared files mode enabled, IPC is disabled 00:17:05.421 EAL: Heap on socket 0 was expanded by 10MB 00:17:05.421 EAL: Calling mem event callback 'spdk:(nil)' 00:17:05.421 EAL: request: mp_malloc_sync 00:17:05.421 EAL: No shared files mode enabled, IPC is disabled 00:17:05.421 EAL: Heap on socket 0 was shrunk by 10MB 00:17:05.421 EAL: Trying to obtain current memory policy. 00:17:05.421 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:05.421 EAL: Restoring previous memory policy: 4 00:17:05.421 EAL: Calling mem event callback 'spdk:(nil)' 00:17:05.421 EAL: request: mp_malloc_sync 00:17:05.421 EAL: No shared files mode enabled, IPC is disabled 00:17:05.421 EAL: Heap on socket 0 was expanded by 18MB 00:17:05.421 EAL: Calling mem event callback 'spdk:(nil)' 00:17:05.421 EAL: request: mp_malloc_sync 00:17:05.421 EAL: No shared files mode enabled, IPC is disabled 00:17:05.421 EAL: Heap on socket 0 was shrunk by 18MB 00:17:05.421 EAL: Trying to obtain current memory policy. 
00:17:05.421 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:05.421 EAL: Restoring previous memory policy: 4 00:17:05.421 EAL: Calling mem event callback 'spdk:(nil)' 00:17:05.421 EAL: request: mp_malloc_sync 00:17:05.421 EAL: No shared files mode enabled, IPC is disabled 00:17:05.421 EAL: Heap on socket 0 was expanded by 34MB 00:17:05.421 EAL: Calling mem event callback 'spdk:(nil)' 00:17:05.421 EAL: request: mp_malloc_sync 00:17:05.421 EAL: No shared files mode enabled, IPC is disabled 00:17:05.421 EAL: Heap on socket 0 was shrunk by 34MB 00:17:05.421 EAL: Trying to obtain current memory policy. 00:17:05.421 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:05.421 EAL: Restoring previous memory policy: 4 00:17:05.421 EAL: Calling mem event callback 'spdk:(nil)' 00:17:05.421 EAL: request: mp_malloc_sync 00:17:05.421 EAL: No shared files mode enabled, IPC is disabled 00:17:05.421 EAL: Heap on socket 0 was expanded by 66MB 00:17:05.678 EAL: Calling mem event callback 'spdk:(nil)' 00:17:05.678 EAL: request: mp_malloc_sync 00:17:05.678 EAL: No shared files mode enabled, IPC is disabled 00:17:05.678 EAL: Heap on socket 0 was shrunk by 66MB 00:17:05.678 EAL: Trying to obtain current memory policy. 00:17:05.678 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:05.678 EAL: Restoring previous memory policy: 4 00:17:05.678 EAL: Calling mem event callback 'spdk:(nil)' 00:17:05.679 EAL: request: mp_malloc_sync 00:17:05.679 EAL: No shared files mode enabled, IPC is disabled 00:17:05.679 EAL: Heap on socket 0 was expanded by 130MB 00:17:05.679 EAL: Calling mem event callback 'spdk:(nil)' 00:17:05.679 EAL: request: mp_malloc_sync 00:17:05.679 EAL: No shared files mode enabled, IPC is disabled 00:17:05.679 EAL: Heap on socket 0 was shrunk by 130MB 00:17:05.936 EAL: Trying to obtain current memory policy. 
00:17:05.936 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:05.936 EAL: Restoring previous memory policy: 4 00:17:05.936 EAL: Calling mem event callback 'spdk:(nil)' 00:17:05.936 EAL: request: mp_malloc_sync 00:17:05.936 EAL: No shared files mode enabled, IPC is disabled 00:17:05.936 EAL: Heap on socket 0 was expanded by 258MB 00:17:06.193 EAL: Calling mem event callback 'spdk:(nil)' 00:17:06.193 EAL: request: mp_malloc_sync 00:17:06.193 EAL: No shared files mode enabled, IPC is disabled 00:17:06.193 EAL: Heap on socket 0 was shrunk by 258MB 00:17:06.451 EAL: Trying to obtain current memory policy. 00:17:06.451 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:06.451 EAL: Restoring previous memory policy: 4 00:17:06.451 EAL: Calling mem event callback 'spdk:(nil)' 00:17:06.451 EAL: request: mp_malloc_sync 00:17:06.451 EAL: No shared files mode enabled, IPC is disabled 00:17:06.451 EAL: Heap on socket 0 was expanded by 514MB 00:17:07.016 EAL: Calling mem event callback 'spdk:(nil)' 00:17:07.016 EAL: request: mp_malloc_sync 00:17:07.016 EAL: No shared files mode enabled, IPC is disabled 00:17:07.016 EAL: Heap on socket 0 was shrunk by 514MB 00:17:07.275 EAL: Trying to obtain current memory policy. 
00:17:07.275 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:07.533 EAL: Restoring previous memory policy: 4 00:17:07.533 EAL: Calling mem event callback 'spdk:(nil)' 00:17:07.533 EAL: request: mp_malloc_sync 00:17:07.533 EAL: No shared files mode enabled, IPC is disabled 00:17:07.533 EAL: Heap on socket 0 was expanded by 1026MB 00:17:08.466 EAL: Calling mem event callback 'spdk:(nil)' 00:17:08.466 EAL: request: mp_malloc_sync 00:17:08.466 EAL: No shared files mode enabled, IPC is disabled 00:17:08.466 EAL: Heap on socket 0 was shrunk by 1026MB 00:17:09.399 passed 00:17:09.399 00:17:09.399 Run Summary: Type Total Ran Passed Failed Inactive 00:17:09.399 suites 1 1 n/a 0 0 00:17:09.399 tests 2 2 2 0 0 00:17:09.399 asserts 5859 5859 5859 0 n/a 00:17:09.399 00:17:09.399 Elapsed time = 4.091 seconds 00:17:09.399 EAL: Calling mem event callback 'spdk:(nil)' 00:17:09.399 EAL: request: mp_malloc_sync 00:17:09.399 EAL: No shared files mode enabled, IPC is disabled 00:17:09.399 EAL: Heap on socket 0 was shrunk by 2MB 00:17:09.399 EAL: No shared files mode enabled, IPC is disabled 00:17:09.399 EAL: No shared files mode enabled, IPC is disabled 00:17:09.399 EAL: No shared files mode enabled, IPC is disabled 00:17:09.399 00:17:09.399 real 0m4.340s 00:17:09.399 user 0m3.593s 00:17:09.399 sys 0m0.610s 00:17:09.399 22:58:44 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:09.399 22:58:44 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:17:09.399 ************************************ 00:17:09.399 END TEST env_vtophys 00:17:09.399 ************************************ 00:17:09.399 22:58:44 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:17:09.399 22:58:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:09.399 22:58:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:09.399 22:58:44 env -- common/autotest_common.sh@10 -- # set +x 00:17:09.399 
************************************ 00:17:09.399 START TEST env_pci ************************************ 00:17:09.399 22:58:44 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:17:09.399 00:17:09.399 00:17:09.400 CUnit - A unit testing framework for C - Version 2.1-3 00:17:09.400 http://cunit.sourceforge.net/ 00:17:09.400 00:17:09.400 00:17:09.400 Suite: pci 00:17:09.400 Test: pci_hook ...[2024-12-09 22:58:44.591532] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 55951 has claimed it 00:17:09.400 passed 00:17:09.400 00:17:09.400 Run Summary: Type Total Ran Passed Failed Inactive 00:17:09.400 suites 1 1 n/a 0 0 00:17:09.400 tests 1 1 1 0 0 00:17:09.400 asserts 25 25 25 0 n/a 00:17:09.400 00:17:09.400 Elapsed time = 0.004 seconds EAL: Cannot find device (10000:00:01.0) 00:17:09.400 EAL: Failed to attach device on primary process 00:17:09.400 00:17:09.400 00:17:09.400 real 0m0.060s 00:17:09.400 user 0m0.031s 00:17:09.400 sys 0m0.029s 00:17:09.400 22:58:44 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:09.400 22:58:44 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:17:09.400 ************************************ 00:17:09.400 END TEST env_pci 00:17:09.400 ************************************ 00:17:09.400 22:58:44 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:17:09.400 22:58:44 env -- env/env.sh@15 -- # uname 00:17:09.400 22:58:44 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:17:09.400 22:58:44 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:17:09.400 22:58:44 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:17:09.400 22:58:44 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:09.400 22:58:44 env
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:09.400 22:58:44 env -- common/autotest_common.sh@10 -- # set +x 00:17:09.400 ************************************ 00:17:09.400 START TEST env_dpdk_post_init 00:17:09.400 ************************************ 00:17:09.400 22:58:44 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:17:09.400 EAL: Detected CPU lcores: 10 00:17:09.400 EAL: Detected NUMA nodes: 1 00:17:09.400 EAL: Detected shared linkage of DPDK 00:17:09.400 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:17:09.400 EAL: Selected IOVA mode 'PA' 00:17:09.658 TELEMETRY: No legacy callbacks, legacy socket not created 00:17:09.658 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:17:09.658 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:17:09.658 Starting DPDK initialization... 00:17:09.658 Starting SPDK post initialization... 00:17:09.658 SPDK NVMe probe 00:17:09.658 Attaching to 0000:00:10.0 00:17:09.658 Attaching to 0000:00:11.0 00:17:09.658 Attached to 0000:00:10.0 00:17:09.658 Attached to 0000:00:11.0 00:17:09.658 Cleaning up... 
00:17:09.658 00:17:09.658 real 0m0.223s 00:17:09.658 user 0m0.061s 00:17:09.658 sys 0m0.062s 00:17:09.658 ************************************ 00:17:09.658 END TEST env_dpdk_post_init 00:17:09.658 ************************************ 00:17:09.658 22:58:44 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:09.658 22:58:44 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:17:09.658 22:58:44 env -- env/env.sh@26 -- # uname 00:17:09.658 22:58:44 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:17:09.658 22:58:44 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:17:09.658 22:58:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:09.658 22:58:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:09.658 22:58:44 env -- common/autotest_common.sh@10 -- # set +x 00:17:09.658 ************************************ 00:17:09.658 START TEST env_mem_callbacks 00:17:09.658 ************************************ 00:17:09.658 22:58:44 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:17:09.658 EAL: Detected CPU lcores: 10 00:17:09.658 EAL: Detected NUMA nodes: 1 00:17:09.658 EAL: Detected shared linkage of DPDK 00:17:09.658 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:17:09.658 EAL: Selected IOVA mode 'PA' 00:17:09.915 00:17:09.915 00:17:09.915 CUnit - A unit testing framework for C - Version 2.1-3 00:17:09.915 http://cunit.sourceforge.net/ 00:17:09.915 00:17:09.915 00:17:09.915 Suite: memory 00:17:09.915 Test: test ... 
00:17:09.915 register 0x200000200000 2097152 00:17:09.915 TELEMETRY: No legacy callbacks, legacy socket not created 00:17:09.915 malloc 3145728 00:17:09.915 register 0x200000400000 4194304 00:17:09.915 buf 0x2000004fffc0 len 3145728 PASSED 00:17:09.915 malloc 64 00:17:09.915 buf 0x2000004ffec0 len 64 PASSED 00:17:09.915 malloc 4194304 00:17:09.915 register 0x200000800000 6291456 00:17:09.915 buf 0x2000009fffc0 len 4194304 PASSED 00:17:09.915 free 0x2000004fffc0 3145728 00:17:09.915 free 0x2000004ffec0 64 00:17:09.915 unregister 0x200000400000 4194304 PASSED 00:17:09.915 free 0x2000009fffc0 4194304 00:17:09.915 unregister 0x200000800000 6291456 PASSED 00:17:09.915 malloc 8388608 00:17:09.915 register 0x200000400000 10485760 00:17:09.915 buf 0x2000005fffc0 len 8388608 PASSED 00:17:09.915 free 0x2000005fffc0 8388608 00:17:09.915 unregister 0x200000400000 10485760 PASSED 00:17:09.915 passed 00:17:09.915 00:17:09.915 Run Summary: Type Total Ran Passed Failed Inactive 00:17:09.915 suites 1 1 n/a 0 0 00:17:09.915 tests 1 1 1 0 0 00:17:09.915 asserts 15 15 15 0 n/a 00:17:09.915 00:17:09.915 Elapsed time = 0.042 seconds 00:17:09.915 00:17:09.915 real 0m0.208s 00:17:09.915 user 0m0.059s 00:17:09.915 sys 0m0.048s 00:17:09.915 22:58:45 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:09.915 22:58:45 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:17:09.915 ************************************ 00:17:09.915 END TEST env_mem_callbacks 00:17:09.915 ************************************ 00:17:09.915 00:17:09.915 real 0m5.434s 00:17:09.915 user 0m4.131s 00:17:09.915 sys 0m0.958s 00:17:09.915 22:58:45 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:09.915 22:58:45 env -- common/autotest_common.sh@10 -- # set +x 00:17:09.915 ************************************ 00:17:09.915 END TEST env 00:17:09.915 ************************************ 00:17:09.915 22:58:45 -- spdk/autotest.sh@156 -- # run_test rpc 
/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:17:09.915 22:58:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:09.915 22:58:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:09.915 22:58:45 -- common/autotest_common.sh@10 -- # set +x 00:17:09.915 ************************************ 00:17:09.915 START TEST rpc 00:17:09.915 ************************************ 00:17:09.915 22:58:45 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:17:09.915 * Looking for test storage... 00:17:09.915 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:17:09.915 22:58:45 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:09.915 22:58:45 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:17:09.915 22:58:45 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:10.189 22:58:45 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:10.189 22:58:45 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:10.189 22:58:45 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:10.189 22:58:45 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:10.189 22:58:45 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:10.189 22:58:45 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:10.189 22:58:45 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:10.189 22:58:45 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:10.189 22:58:45 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:10.189 22:58:45 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:10.189 22:58:45 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:10.189 22:58:45 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:10.189 22:58:45 rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:10.189 22:58:45 rpc -- scripts/common.sh@345 -- # : 1 00:17:10.189 22:58:45 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:10.189 22:58:45 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:10.189 22:58:45 rpc -- scripts/common.sh@365 -- # decimal 1 00:17:10.189 22:58:45 rpc -- scripts/common.sh@353 -- # local d=1 00:17:10.189 22:58:45 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:10.189 22:58:45 rpc -- scripts/common.sh@355 -- # echo 1 00:17:10.189 22:58:45 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:10.189 22:58:45 rpc -- scripts/common.sh@366 -- # decimal 2 00:17:10.189 22:58:45 rpc -- scripts/common.sh@353 -- # local d=2 00:17:10.189 22:58:45 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:10.189 22:58:45 rpc -- scripts/common.sh@355 -- # echo 2 00:17:10.189 22:58:45 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:10.189 22:58:45 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:10.189 22:58:45 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:10.189 22:58:45 rpc -- scripts/common.sh@368 -- # return 0 00:17:10.189 22:58:45 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:10.189 22:58:45 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:10.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.189 --rc genhtml_branch_coverage=1 00:17:10.189 --rc genhtml_function_coverage=1 00:17:10.189 --rc genhtml_legend=1 00:17:10.189 --rc geninfo_all_blocks=1 00:17:10.189 --rc geninfo_unexecuted_blocks=1 00:17:10.189 00:17:10.189 ' 00:17:10.189 22:58:45 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:10.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.189 --rc genhtml_branch_coverage=1 00:17:10.189 --rc genhtml_function_coverage=1 00:17:10.189 --rc genhtml_legend=1 00:17:10.189 --rc geninfo_all_blocks=1 00:17:10.190 --rc geninfo_unexecuted_blocks=1 00:17:10.190 00:17:10.190 ' 00:17:10.190 22:58:45 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:10.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:17:10.190 --rc genhtml_branch_coverage=1 00:17:10.190 --rc genhtml_function_coverage=1 00:17:10.190 --rc genhtml_legend=1 00:17:10.190 --rc geninfo_all_blocks=1 00:17:10.190 --rc geninfo_unexecuted_blocks=1 00:17:10.190 00:17:10.190 ' 00:17:10.190 22:58:45 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:10.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.190 --rc genhtml_branch_coverage=1 00:17:10.190 --rc genhtml_function_coverage=1 00:17:10.190 --rc genhtml_legend=1 00:17:10.190 --rc geninfo_all_blocks=1 00:17:10.190 --rc geninfo_unexecuted_blocks=1 00:17:10.190 00:17:10.190 ' 00:17:10.190 22:58:45 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56078 00:17:10.190 22:58:45 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:17:10.190 22:58:45 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56078 00:17:10.190 22:58:45 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:17:10.190 22:58:45 rpc -- common/autotest_common.sh@835 -- # '[' -z 56078 ']' 00:17:10.190 22:58:45 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.190 22:58:45 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:10.190 22:58:45 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.190 22:58:45 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:10.190 22:58:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.190 [2024-12-09 22:58:45.428000] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:17:10.190 [2024-12-09 22:58:45.428140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56078 ] 00:17:10.446 [2024-12-09 22:58:45.579775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.446 [2024-12-09 22:58:45.683037] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:17:10.446 [2024-12-09 22:58:45.683129] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56078' to capture a snapshot of events at runtime. 00:17:10.446 [2024-12-09 22:58:45.683148] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:10.446 [2024-12-09 22:58:45.683164] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:10.446 [2024-12-09 22:58:45.683175] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56078 for offline analysis/debug. 
00:17:10.446 [2024-12-09 22:58:45.684298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.011 22:58:46 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:11.011 22:58:46 rpc -- common/autotest_common.sh@868 -- # return 0 00:17:11.011 22:58:46 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:17:11.011 22:58:46 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:17:11.011 22:58:46 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:17:11.011 22:58:46 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:17:11.011 22:58:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:11.011 22:58:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:11.011 22:58:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.011 ************************************ 00:17:11.011 START TEST rpc_integrity 00:17:11.011 ************************************ 00:17:11.011 22:58:46 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:17:11.011 22:58:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:11.011 22:58:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.011 22:58:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:11.011 22:58:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.011 22:58:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:17:11.011 22:58:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:17:11.011 22:58:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:17:11.011 22:58:46 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:17:11.011 22:58:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.011 22:58:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:11.011 22:58:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.011 22:58:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:17:11.011 22:58:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:17:11.011 22:58:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.011 22:58:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:11.269 22:58:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.269 22:58:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:17:11.269 { 00:17:11.269 "name": "Malloc0", 00:17:11.269 "aliases": [ 00:17:11.269 "50aaffe2-0372-4e8b-9cd4-5810e84705ed" 00:17:11.269 ], 00:17:11.269 "product_name": "Malloc disk", 00:17:11.269 "block_size": 512, 00:17:11.269 "num_blocks": 16384, 00:17:11.269 "uuid": "50aaffe2-0372-4e8b-9cd4-5810e84705ed", 00:17:11.269 "assigned_rate_limits": { 00:17:11.269 "rw_ios_per_sec": 0, 00:17:11.269 "rw_mbytes_per_sec": 0, 00:17:11.269 "r_mbytes_per_sec": 0, 00:17:11.269 "w_mbytes_per_sec": 0 00:17:11.269 }, 00:17:11.269 "claimed": false, 00:17:11.269 "zoned": false, 00:17:11.269 "supported_io_types": { 00:17:11.269 "read": true, 00:17:11.269 "write": true, 00:17:11.269 "unmap": true, 00:17:11.269 "flush": true, 00:17:11.269 "reset": true, 00:17:11.269 "nvme_admin": false, 00:17:11.269 "nvme_io": false, 00:17:11.269 "nvme_io_md": false, 00:17:11.269 "write_zeroes": true, 00:17:11.269 "zcopy": true, 00:17:11.269 "get_zone_info": false, 00:17:11.269 "zone_management": false, 00:17:11.269 "zone_append": false, 00:17:11.269 "compare": false, 00:17:11.269 "compare_and_write": false, 00:17:11.269 "abort": true, 00:17:11.269 "seek_hole": false, 
00:17:11.269 "seek_data": false, 00:17:11.269 "copy": true, 00:17:11.269 "nvme_iov_md": false 00:17:11.269 }, 00:17:11.269 "memory_domains": [ 00:17:11.269 { 00:17:11.269 "dma_device_id": "system", 00:17:11.269 "dma_device_type": 1 00:17:11.269 }, 00:17:11.269 { 00:17:11.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.269 "dma_device_type": 2 00:17:11.269 } 00:17:11.269 ], 00:17:11.269 "driver_specific": {} 00:17:11.269 } 00:17:11.269 ]' 00:17:11.269 22:58:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:17:11.269 22:58:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:17:11.269 22:58:46 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:17:11.269 22:58:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.269 22:58:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:11.269 [2024-12-09 22:58:46.419172] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:17:11.269 [2024-12-09 22:58:46.419240] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.269 [2024-12-09 22:58:46.419264] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:17:11.269 [2024-12-09 22:58:46.419278] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.269 [2024-12-09 22:58:46.421499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.269 [2024-12-09 22:58:46.421539] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:17:11.269 Passthru0 00:17:11.269 22:58:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.269 22:58:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:17:11.269 22:58:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.269 22:58:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:17:11.269 22:58:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.269 22:58:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:17:11.269 { 00:17:11.269 "name": "Malloc0", 00:17:11.269 "aliases": [ 00:17:11.269 "50aaffe2-0372-4e8b-9cd4-5810e84705ed" 00:17:11.269 ], 00:17:11.269 "product_name": "Malloc disk", 00:17:11.269 "block_size": 512, 00:17:11.269 "num_blocks": 16384, 00:17:11.269 "uuid": "50aaffe2-0372-4e8b-9cd4-5810e84705ed", 00:17:11.269 "assigned_rate_limits": { 00:17:11.269 "rw_ios_per_sec": 0, 00:17:11.269 "rw_mbytes_per_sec": 0, 00:17:11.269 "r_mbytes_per_sec": 0, 00:17:11.269 "w_mbytes_per_sec": 0 00:17:11.269 }, 00:17:11.269 "claimed": true, 00:17:11.269 "claim_type": "exclusive_write", 00:17:11.269 "zoned": false, 00:17:11.269 "supported_io_types": { 00:17:11.269 "read": true, 00:17:11.269 "write": true, 00:17:11.269 "unmap": true, 00:17:11.269 "flush": true, 00:17:11.269 "reset": true, 00:17:11.269 "nvme_admin": false, 00:17:11.269 "nvme_io": false, 00:17:11.269 "nvme_io_md": false, 00:17:11.269 "write_zeroes": true, 00:17:11.269 "zcopy": true, 00:17:11.269 "get_zone_info": false, 00:17:11.269 "zone_management": false, 00:17:11.269 "zone_append": false, 00:17:11.269 "compare": false, 00:17:11.269 "compare_and_write": false, 00:17:11.269 "abort": true, 00:17:11.269 "seek_hole": false, 00:17:11.269 "seek_data": false, 00:17:11.269 "copy": true, 00:17:11.269 "nvme_iov_md": false 00:17:11.269 }, 00:17:11.269 "memory_domains": [ 00:17:11.269 { 00:17:11.269 "dma_device_id": "system", 00:17:11.270 "dma_device_type": 1 00:17:11.270 }, 00:17:11.270 { 00:17:11.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.270 "dma_device_type": 2 00:17:11.270 } 00:17:11.270 ], 00:17:11.270 "driver_specific": {} 00:17:11.270 }, 00:17:11.270 { 00:17:11.270 "name": "Passthru0", 00:17:11.270 "aliases": [ 00:17:11.270 "7f71fb5a-4679-5c64-aa2e-88800f24d1ee" 00:17:11.270 ], 00:17:11.270 "product_name": "passthru", 00:17:11.270 
"block_size": 512, 00:17:11.270 "num_blocks": 16384, 00:17:11.270 "uuid": "7f71fb5a-4679-5c64-aa2e-88800f24d1ee", 00:17:11.270 "assigned_rate_limits": { 00:17:11.270 "rw_ios_per_sec": 0, 00:17:11.270 "rw_mbytes_per_sec": 0, 00:17:11.270 "r_mbytes_per_sec": 0, 00:17:11.270 "w_mbytes_per_sec": 0 00:17:11.270 }, 00:17:11.270 "claimed": false, 00:17:11.270 "zoned": false, 00:17:11.270 "supported_io_types": { 00:17:11.270 "read": true, 00:17:11.270 "write": true, 00:17:11.270 "unmap": true, 00:17:11.270 "flush": true, 00:17:11.270 "reset": true, 00:17:11.270 "nvme_admin": false, 00:17:11.270 "nvme_io": false, 00:17:11.270 "nvme_io_md": false, 00:17:11.270 "write_zeroes": true, 00:17:11.270 "zcopy": true, 00:17:11.270 "get_zone_info": false, 00:17:11.270 "zone_management": false, 00:17:11.270 "zone_append": false, 00:17:11.270 "compare": false, 00:17:11.270 "compare_and_write": false, 00:17:11.270 "abort": true, 00:17:11.270 "seek_hole": false, 00:17:11.270 "seek_data": false, 00:17:11.270 "copy": true, 00:17:11.270 "nvme_iov_md": false 00:17:11.270 }, 00:17:11.270 "memory_domains": [ 00:17:11.270 { 00:17:11.270 "dma_device_id": "system", 00:17:11.270 "dma_device_type": 1 00:17:11.270 }, 00:17:11.270 { 00:17:11.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.270 "dma_device_type": 2 00:17:11.270 } 00:17:11.270 ], 00:17:11.270 "driver_specific": { 00:17:11.270 "passthru": { 00:17:11.270 "name": "Passthru0", 00:17:11.270 "base_bdev_name": "Malloc0" 00:17:11.270 } 00:17:11.270 } 00:17:11.270 } 00:17:11.270 ]' 00:17:11.270 22:58:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:17:11.270 22:58:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:17:11.270 22:58:46 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:17:11.270 22:58:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.270 22:58:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:11.270 22:58:46 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.270 22:58:46 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:11.270 22:58:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.270 22:58:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:11.270 22:58:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.270 22:58:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:11.270 22:58:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.270 22:58:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:11.270 22:58:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.270 22:58:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:17:11.270 22:58:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:17:11.270 22:58:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:17:11.270 00:17:11.270 real 0m0.237s 00:17:11.270 user 0m0.123s 00:17:11.270 sys 0m0.035s 00:17:11.270 22:58:46 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:11.270 ************************************ 00:17:11.270 22:58:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:11.270 END TEST rpc_integrity 00:17:11.270 ************************************ 00:17:11.270 22:58:46 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:17:11.270 22:58:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:11.270 22:58:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:11.270 22:58:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.270 ************************************ 00:17:11.270 START TEST rpc_plugins 00:17:11.270 ************************************ 00:17:11.270 22:58:46 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:17:11.270 22:58:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:17:11.270 22:58:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.270 22:58:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:17:11.270 22:58:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.270 22:58:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:17:11.270 22:58:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:17:11.270 22:58:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.270 22:58:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:17:11.270 22:58:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.270 22:58:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:17:11.270 { 00:17:11.270 "name": "Malloc1", 00:17:11.270 "aliases": [ 00:17:11.270 "b20a7599-0b09-4dec-89c8-e5b3ccde0f70" 00:17:11.270 ], 00:17:11.270 "product_name": "Malloc disk", 00:17:11.270 "block_size": 4096, 00:17:11.270 "num_blocks": 256, 00:17:11.270 "uuid": "b20a7599-0b09-4dec-89c8-e5b3ccde0f70", 00:17:11.270 "assigned_rate_limits": { 00:17:11.270 "rw_ios_per_sec": 0, 00:17:11.270 "rw_mbytes_per_sec": 0, 00:17:11.270 "r_mbytes_per_sec": 0, 00:17:11.270 "w_mbytes_per_sec": 0 00:17:11.270 }, 00:17:11.270 "claimed": false, 00:17:11.270 "zoned": false, 00:17:11.270 "supported_io_types": { 00:17:11.270 "read": true, 00:17:11.270 "write": true, 00:17:11.270 "unmap": true, 00:17:11.270 "flush": true, 00:17:11.270 "reset": true, 00:17:11.270 "nvme_admin": false, 00:17:11.270 "nvme_io": false, 00:17:11.270 "nvme_io_md": false, 00:17:11.270 "write_zeroes": true, 00:17:11.270 "zcopy": true, 00:17:11.270 "get_zone_info": false, 00:17:11.270 "zone_management": false, 00:17:11.270 "zone_append": false, 00:17:11.270 "compare": false, 00:17:11.270 "compare_and_write": false, 00:17:11.270 "abort": true, 00:17:11.270 "seek_hole": false, 00:17:11.270 "seek_data": false, 00:17:11.270 "copy": 
true, 00:17:11.270 "nvme_iov_md": false 00:17:11.270 }, 00:17:11.270 "memory_domains": [ 00:17:11.270 { 00:17:11.270 "dma_device_id": "system", 00:17:11.270 "dma_device_type": 1 00:17:11.270 }, 00:17:11.270 { 00:17:11.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.270 "dma_device_type": 2 00:17:11.270 } 00:17:11.270 ], 00:17:11.270 "driver_specific": {} 00:17:11.270 } 00:17:11.270 ]' 00:17:11.270 22:58:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:17:11.528 22:58:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:17:11.528 22:58:46 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:17:11.528 22:58:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.528 22:58:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:17:11.528 22:58:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.528 22:58:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:17:11.528 22:58:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.528 22:58:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:17:11.528 22:58:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.528 22:58:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:17:11.528 22:58:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:17:11.528 ************************************ 00:17:11.528 END TEST rpc_plugins 00:17:11.528 ************************************ 00:17:11.528 22:58:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:17:11.528 00:17:11.528 real 0m0.114s 00:17:11.528 user 0m0.064s 00:17:11.528 sys 0m0.015s 00:17:11.528 22:58:46 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:11.528 22:58:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:17:11.528 22:58:46 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:17:11.528 22:58:46 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:11.528 22:58:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:11.528 22:58:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.528 ************************************ 00:17:11.528 START TEST rpc_trace_cmd_test 00:17:11.528 ************************************ 00:17:11.528 22:58:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:17:11.528 22:58:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:17:11.528 22:58:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:17:11.528 22:58:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.528 22:58:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.528 22:58:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.528 22:58:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:17:11.528 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56078", 00:17:11.528 "tpoint_group_mask": "0x8", 00:17:11.528 "iscsi_conn": { 00:17:11.528 "mask": "0x2", 00:17:11.528 "tpoint_mask": "0x0" 00:17:11.528 }, 00:17:11.528 "scsi": { 00:17:11.528 "mask": "0x4", 00:17:11.528 "tpoint_mask": "0x0" 00:17:11.528 }, 00:17:11.528 "bdev": { 00:17:11.528 "mask": "0x8", 00:17:11.528 "tpoint_mask": "0xffffffffffffffff" 00:17:11.528 }, 00:17:11.528 "nvmf_rdma": { 00:17:11.528 "mask": "0x10", 00:17:11.528 "tpoint_mask": "0x0" 00:17:11.528 }, 00:17:11.528 "nvmf_tcp": { 00:17:11.528 "mask": "0x20", 00:17:11.528 "tpoint_mask": "0x0" 00:17:11.528 }, 00:17:11.528 "ftl": { 00:17:11.528 "mask": "0x40", 00:17:11.528 "tpoint_mask": "0x0" 00:17:11.528 }, 00:17:11.528 "blobfs": { 00:17:11.528 "mask": "0x80", 00:17:11.529 "tpoint_mask": "0x0" 00:17:11.529 }, 00:17:11.529 "dsa": { 00:17:11.529 "mask": "0x200", 00:17:11.529 "tpoint_mask": "0x0" 00:17:11.529 }, 00:17:11.529 "thread": { 00:17:11.529 "mask": "0x400", 00:17:11.529 
"tpoint_mask": "0x0" 00:17:11.529 }, 00:17:11.529 "nvme_pcie": { 00:17:11.529 "mask": "0x800", 00:17:11.529 "tpoint_mask": "0x0" 00:17:11.529 }, 00:17:11.529 "iaa": { 00:17:11.529 "mask": "0x1000", 00:17:11.529 "tpoint_mask": "0x0" 00:17:11.529 }, 00:17:11.529 "nvme_tcp": { 00:17:11.529 "mask": "0x2000", 00:17:11.529 "tpoint_mask": "0x0" 00:17:11.529 }, 00:17:11.529 "bdev_nvme": { 00:17:11.529 "mask": "0x4000", 00:17:11.529 "tpoint_mask": "0x0" 00:17:11.529 }, 00:17:11.529 "sock": { 00:17:11.529 "mask": "0x8000", 00:17:11.529 "tpoint_mask": "0x0" 00:17:11.529 }, 00:17:11.529 "blob": { 00:17:11.529 "mask": "0x10000", 00:17:11.529 "tpoint_mask": "0x0" 00:17:11.529 }, 00:17:11.529 "bdev_raid": { 00:17:11.529 "mask": "0x20000", 00:17:11.529 "tpoint_mask": "0x0" 00:17:11.529 }, 00:17:11.529 "scheduler": { 00:17:11.529 "mask": "0x40000", 00:17:11.529 "tpoint_mask": "0x0" 00:17:11.529 } 00:17:11.529 }' 00:17:11.529 22:58:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:17:11.529 22:58:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:17:11.529 22:58:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:17:11.529 22:58:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:17:11.529 22:58:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:17:11.529 22:58:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:17:11.529 22:58:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:17:11.529 22:58:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:17:11.529 22:58:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:17:11.786 ************************************ 00:17:11.786 END TEST rpc_trace_cmd_test 00:17:11.786 ************************************ 00:17:11.786 22:58:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:17:11.786 00:17:11.786 real 0m0.174s 00:17:11.786 user 
0m0.137s 00:17:11.786 sys 0m0.026s 00:17:11.786 22:58:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:11.786 22:58:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.786 22:58:46 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:17:11.786 22:58:46 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:17:11.786 22:58:46 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:17:11.786 22:58:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:11.786 22:58:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:11.786 22:58:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.786 ************************************ 00:17:11.786 START TEST rpc_daemon_integrity 00:17:11.786 ************************************ 00:17:11.786 22:58:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:17:11.786 22:58:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:11.786 22:58:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.786 22:58:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:11.786 22:58:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.786 22:58:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:17:11.786 22:58:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:17:11.786 22:58:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:17:11.786 22:58:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:17:11.787 22:58:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.787 22:58:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:11.787 22:58:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.787 22:58:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:17:11.787 22:58:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:17:11.787 22:58:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.787 22:58:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:11.787 22:58:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.787 22:58:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:17:11.787 { 00:17:11.787 "name": "Malloc2", 00:17:11.787 "aliases": [ 00:17:11.787 "4b5b666c-21bf-4b15-a9b9-37d9847d006a" 00:17:11.787 ], 00:17:11.787 "product_name": "Malloc disk", 00:17:11.787 "block_size": 512, 00:17:11.787 "num_blocks": 16384, 00:17:11.787 "uuid": "4b5b666c-21bf-4b15-a9b9-37d9847d006a", 00:17:11.787 "assigned_rate_limits": { 00:17:11.787 "rw_ios_per_sec": 0, 00:17:11.787 "rw_mbytes_per_sec": 0, 00:17:11.787 "r_mbytes_per_sec": 0, 00:17:11.787 "w_mbytes_per_sec": 0 00:17:11.787 }, 00:17:11.787 "claimed": false, 00:17:11.787 "zoned": false, 00:17:11.787 "supported_io_types": { 00:17:11.787 "read": true, 00:17:11.787 "write": true, 00:17:11.787 "unmap": true, 00:17:11.787 "flush": true, 00:17:11.787 "reset": true, 00:17:11.787 "nvme_admin": false, 00:17:11.787 "nvme_io": false, 00:17:11.787 "nvme_io_md": false, 00:17:11.787 "write_zeroes": true, 00:17:11.787 "zcopy": true, 00:17:11.787 "get_zone_info": false, 00:17:11.787 "zone_management": false, 00:17:11.787 "zone_append": false, 00:17:11.787 "compare": false, 00:17:11.787 "compare_and_write": false, 00:17:11.787 "abort": true, 00:17:11.787 "seek_hole": false, 00:17:11.787 "seek_data": false, 00:17:11.787 "copy": true, 00:17:11.787 "nvme_iov_md": false 00:17:11.787 }, 00:17:11.787 "memory_domains": [ 00:17:11.787 { 00:17:11.787 "dma_device_id": "system", 00:17:11.787 "dma_device_type": 1 00:17:11.787 }, 00:17:11.787 { 00:17:11.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.787 "dma_device_type": 2 00:17:11.787 } 
00:17:11.787 ], 00:17:11.787 "driver_specific": {} 00:17:11.787 } 00:17:11.787 ]' 00:17:11.787 22:58:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:17:11.787 22:58:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:17:11.787 22:58:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:17:11.787 22:58:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.787 22:58:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:11.787 [2024-12-09 22:58:47.059199] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:17:11.787 [2024-12-09 22:58:47.059430] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.787 [2024-12-09 22:58:47.059459] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:11.787 [2024-12-09 22:58:47.059470] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.787 [2024-12-09 22:58:47.061645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.787 [2024-12-09 22:58:47.061681] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:17:11.787 Passthru0 00:17:11.787 22:58:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.787 22:58:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:17:11.787 22:58:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.787 22:58:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:11.787 22:58:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.787 22:58:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:17:11.787 { 00:17:11.787 "name": "Malloc2", 00:17:11.787 "aliases": [ 00:17:11.787 "4b5b666c-21bf-4b15-a9b9-37d9847d006a" 
00:17:11.787 ], 00:17:11.787 "product_name": "Malloc disk", 00:17:11.787 "block_size": 512, 00:17:11.787 "num_blocks": 16384, 00:17:11.787 "uuid": "4b5b666c-21bf-4b15-a9b9-37d9847d006a", 00:17:11.787 "assigned_rate_limits": { 00:17:11.787 "rw_ios_per_sec": 0, 00:17:11.787 "rw_mbytes_per_sec": 0, 00:17:11.787 "r_mbytes_per_sec": 0, 00:17:11.787 "w_mbytes_per_sec": 0 00:17:11.787 }, 00:17:11.787 "claimed": true, 00:17:11.787 "claim_type": "exclusive_write", 00:17:11.787 "zoned": false, 00:17:11.787 "supported_io_types": { 00:17:11.787 "read": true, 00:17:11.787 "write": true, 00:17:11.787 "unmap": true, 00:17:11.787 "flush": true, 00:17:11.787 "reset": true, 00:17:11.787 "nvme_admin": false, 00:17:11.787 "nvme_io": false, 00:17:11.787 "nvme_io_md": false, 00:17:11.787 "write_zeroes": true, 00:17:11.787 "zcopy": true, 00:17:11.787 "get_zone_info": false, 00:17:11.787 "zone_management": false, 00:17:11.787 "zone_append": false, 00:17:11.787 "compare": false, 00:17:11.787 "compare_and_write": false, 00:17:11.787 "abort": true, 00:17:11.787 "seek_hole": false, 00:17:11.787 "seek_data": false, 00:17:11.787 "copy": true, 00:17:11.787 "nvme_iov_md": false 00:17:11.787 }, 00:17:11.787 "memory_domains": [ 00:17:11.787 { 00:17:11.787 "dma_device_id": "system", 00:17:11.787 "dma_device_type": 1 00:17:11.787 }, 00:17:11.787 { 00:17:11.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.787 "dma_device_type": 2 00:17:11.787 } 00:17:11.787 ], 00:17:11.787 "driver_specific": {} 00:17:11.787 }, 00:17:11.787 { 00:17:11.787 "name": "Passthru0", 00:17:11.787 "aliases": [ 00:17:11.787 "881fbff4-f780-5b05-a203-51407d33f5ed" 00:17:11.787 ], 00:17:11.787 "product_name": "passthru", 00:17:11.787 "block_size": 512, 00:17:11.787 "num_blocks": 16384, 00:17:11.787 "uuid": "881fbff4-f780-5b05-a203-51407d33f5ed", 00:17:11.787 "assigned_rate_limits": { 00:17:11.787 "rw_ios_per_sec": 0, 00:17:11.787 "rw_mbytes_per_sec": 0, 00:17:11.787 "r_mbytes_per_sec": 0, 00:17:11.787 "w_mbytes_per_sec": 0 
00:17:11.787 }, 00:17:11.787 "claimed": false, 00:17:11.787 "zoned": false, 00:17:11.787 "supported_io_types": { 00:17:11.787 "read": true, 00:17:11.787 "write": true, 00:17:11.787 "unmap": true, 00:17:11.787 "flush": true, 00:17:11.787 "reset": true, 00:17:11.787 "nvme_admin": false, 00:17:11.787 "nvme_io": false, 00:17:11.787 "nvme_io_md": false, 00:17:11.787 "write_zeroes": true, 00:17:11.787 "zcopy": true, 00:17:11.787 "get_zone_info": false, 00:17:11.787 "zone_management": false, 00:17:11.787 "zone_append": false, 00:17:11.787 "compare": false, 00:17:11.787 "compare_and_write": false, 00:17:11.787 "abort": true, 00:17:11.787 "seek_hole": false, 00:17:11.787 "seek_data": false, 00:17:11.787 "copy": true, 00:17:11.787 "nvme_iov_md": false 00:17:11.787 }, 00:17:11.787 "memory_domains": [ 00:17:11.787 { 00:17:11.787 "dma_device_id": "system", 00:17:11.787 "dma_device_type": 1 00:17:11.787 }, 00:17:11.787 { 00:17:11.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.787 "dma_device_type": 2 00:17:11.787 } 00:17:11.787 ], 00:17:11.787 "driver_specific": { 00:17:11.787 "passthru": { 00:17:11.787 "name": "Passthru0", 00:17:11.787 "base_bdev_name": "Malloc2" 00:17:11.787 } 00:17:11.787 } 00:17:11.787 } 00:17:11.787 ]' 00:17:11.787 22:58:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:17:11.787 22:58:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:17:11.787 22:58:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:17:11.787 22:58:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.787 22:58:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:11.787 22:58:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.787 22:58:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:17:11.787 22:58:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:11.787 22:58:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:12.045 22:58:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.045 22:58:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:12.045 22:58:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.045 22:58:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:12.045 22:58:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.045 22:58:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:17:12.045 22:58:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:17:12.045 ************************************ 00:17:12.045 END TEST rpc_daemon_integrity 00:17:12.045 ************************************ 00:17:12.045 22:58:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:17:12.045 00:17:12.045 real 0m0.235s 00:17:12.045 user 0m0.117s 00:17:12.045 sys 0m0.037s 00:17:12.045 22:58:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:12.045 22:58:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:12.045 22:58:47 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:17:12.045 22:58:47 rpc -- rpc/rpc.sh@84 -- # killprocess 56078 00:17:12.045 22:58:47 rpc -- common/autotest_common.sh@954 -- # '[' -z 56078 ']' 00:17:12.045 22:58:47 rpc -- common/autotest_common.sh@958 -- # kill -0 56078 00:17:12.045 22:58:47 rpc -- common/autotest_common.sh@959 -- # uname 00:17:12.045 22:58:47 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:12.045 22:58:47 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56078 00:17:12.045 killing process with pid 56078 00:17:12.045 22:58:47 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:12.045 22:58:47 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:17:12.045 22:58:47 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56078' 00:17:12.045 22:58:47 rpc -- common/autotest_common.sh@973 -- # kill 56078 00:17:12.045 22:58:47 rpc -- common/autotest_common.sh@978 -- # wait 56078 00:17:13.440 00:17:13.440 real 0m3.469s 00:17:13.440 user 0m3.853s 00:17:13.440 sys 0m0.628s 00:17:13.440 22:58:48 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:13.440 ************************************ 00:17:13.440 END TEST rpc 00:17:13.440 ************************************ 00:17:13.440 22:58:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.440 22:58:48 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:17:13.440 22:58:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:13.440 22:58:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:13.440 22:58:48 -- common/autotest_common.sh@10 -- # set +x 00:17:13.440 ************************************ 00:17:13.440 START TEST skip_rpc 00:17:13.440 ************************************ 00:17:13.440 22:58:48 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:17:13.730 * Looking for test storage... 
00:17:13.730 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:17:13.730 22:58:48 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:13.730 22:58:48 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:17:13.730 22:58:48 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:13.730 22:58:48 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:13.730 22:58:48 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:13.730 22:58:48 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:13.730 22:58:48 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:13.730 22:58:48 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:13.730 22:58:48 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:13.730 22:58:48 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:13.730 22:58:48 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:13.730 22:58:48 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:13.730 22:58:48 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:13.730 22:58:48 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:13.730 22:58:48 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:13.730 22:58:48 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:13.730 22:58:48 skip_rpc -- scripts/common.sh@345 -- # : 1 00:17:13.730 22:58:48 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:13.730 22:58:48 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:13.730 22:58:48 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:13.730 22:58:48 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:17:13.730 22:58:48 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:13.730 22:58:48 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:17:13.730 22:58:48 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:13.730 22:58:48 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:13.730 22:58:48 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:17:13.730 22:58:48 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:13.730 22:58:48 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:17:13.730 22:58:48 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:13.730 22:58:48 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:13.730 22:58:48 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:13.730 22:58:48 skip_rpc -- scripts/common.sh@368 -- # return 0 00:17:13.730 22:58:48 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:13.730 22:58:48 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:13.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.730 --rc genhtml_branch_coverage=1 00:17:13.730 --rc genhtml_function_coverage=1 00:17:13.730 --rc genhtml_legend=1 00:17:13.730 --rc geninfo_all_blocks=1 00:17:13.730 --rc geninfo_unexecuted_blocks=1 00:17:13.730 00:17:13.730 ' 00:17:13.730 22:58:48 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:13.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.730 --rc genhtml_branch_coverage=1 00:17:13.730 --rc genhtml_function_coverage=1 00:17:13.730 --rc genhtml_legend=1 00:17:13.730 --rc geninfo_all_blocks=1 00:17:13.730 --rc geninfo_unexecuted_blocks=1 00:17:13.730 00:17:13.730 ' 00:17:13.730 22:58:48 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:17:13.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.730 --rc genhtml_branch_coverage=1 00:17:13.730 --rc genhtml_function_coverage=1 00:17:13.730 --rc genhtml_legend=1 00:17:13.730 --rc geninfo_all_blocks=1 00:17:13.730 --rc geninfo_unexecuted_blocks=1 00:17:13.730 00:17:13.730 ' 00:17:13.730 22:58:48 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:13.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.730 --rc genhtml_branch_coverage=1 00:17:13.730 --rc genhtml_function_coverage=1 00:17:13.730 --rc genhtml_legend=1 00:17:13.730 --rc geninfo_all_blocks=1 00:17:13.730 --rc geninfo_unexecuted_blocks=1 00:17:13.730 00:17:13.730 ' 00:17:13.730 22:58:48 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:17:13.730 22:58:48 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:17:13.730 22:58:48 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:17:13.730 22:58:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:13.730 22:58:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:13.730 22:58:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.730 ************************************ 00:17:13.730 START TEST skip_rpc 00:17:13.730 ************************************ 00:17:13.730 22:58:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:17:13.730 22:58:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=56285 00:17:13.730 22:58:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:17:13.730 22:58:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:17:13.730 22:58:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:17:13.730 [2024-12-09 22:58:48.953704] Starting SPDK v25.01-pre 
git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:17:13.730 [2024-12-09 22:58:48.954040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56285 ] 00:17:13.989 [2024-12-09 22:58:49.116843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.989 [2024-12-09 22:58:49.212336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.250 22:58:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:17:19.250 22:58:53 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:19.250 22:58:53 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:17:19.250 22:58:53 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:19.250 22:58:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.250 22:58:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:19.250 22:58:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.250 22:58:53 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:17:19.250 22:58:53 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.250 22:58:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.250 22:58:53 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:19.250 22:58:53 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:19.250 22:58:53 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:19.250 22:58:53 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:19.250 22:58:53 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:17:19.250 22:58:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:17:19.250 22:58:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56285 00:17:19.250 22:58:53 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56285 ']' 00:17:19.250 22:58:53 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56285 00:17:19.250 22:58:53 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:17:19.250 22:58:53 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:19.250 22:58:53 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56285 00:17:19.250 killing process with pid 56285 00:17:19.250 22:58:53 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:19.250 22:58:53 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:19.250 22:58:53 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56285' 00:17:19.250 22:58:53 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56285 00:17:19.250 22:58:53 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56285 00:17:19.816 ************************************ 00:17:19.816 END TEST skip_rpc 00:17:19.816 ************************************ 00:17:19.816 00:17:19.816 real 0m6.250s 00:17:19.816 user 0m5.853s 00:17:19.816 sys 0m0.287s 00:17:19.816 22:58:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:19.816 22:58:55 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.816 22:58:55 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:17:19.817 22:58:55 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:19.817 22:58:55 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:19.817 22:58:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.817 
************************************ 00:17:19.817 START TEST skip_rpc_with_json 00:17:19.817 ************************************ 00:17:19.817 22:58:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:17:19.817 22:58:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:17:19.817 22:58:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=56378 00:17:19.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.817 22:58:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:17:19.817 22:58:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 56378 00:17:19.817 22:58:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 56378 ']' 00:17:19.817 22:58:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.817 22:58:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:19.817 22:58:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:17:19.817 22:58:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.817 22:58:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:19.817 22:58:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:17:20.074 [2024-12-09 22:58:55.247482] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:17:20.074 [2024-12-09 22:58:55.247602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56378 ] 00:17:20.075 [2024-12-09 22:58:55.393940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.332 [2024-12-09 22:58:55.478447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.900 22:58:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:20.900 22:58:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:17:20.900 22:58:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:17:20.900 22:58:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.900 22:58:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:17:20.900 [2024-12-09 22:58:56.039520] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:17:20.900 request: 00:17:20.900 { 00:17:20.900 "trtype": "tcp", 00:17:20.900 "method": "nvmf_get_transports", 00:17:20.900 "req_id": 1 00:17:20.900 } 00:17:20.900 Got JSON-RPC error response 00:17:20.900 response: 00:17:20.900 { 00:17:20.900 "code": -19, 00:17:20.900 "message": "No such device" 00:17:20.900 } 00:17:20.900 22:58:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:20.900 22:58:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:17:20.900 22:58:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.900 22:58:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:17:20.900 [2024-12-09 22:58:56.051607] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:17:20.900 22:58:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.900 22:58:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:17:20.900 22:58:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.900 22:58:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:17:20.900 22:58:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.900 22:58:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:17:20.900 { 00:17:20.900 "subsystems": [ 00:17:20.900 { 00:17:20.900 "subsystem": "fsdev", 00:17:20.900 "config": [ 00:17:20.900 { 00:17:20.900 "method": "fsdev_set_opts", 00:17:20.900 "params": { 00:17:20.900 "fsdev_io_pool_size": 65535, 00:17:20.900 "fsdev_io_cache_size": 256 00:17:20.900 } 00:17:20.900 } 00:17:20.900 ] 00:17:20.900 }, 00:17:20.900 { 00:17:20.900 "subsystem": "keyring", 00:17:20.900 "config": [] 00:17:20.900 }, 00:17:20.900 { 00:17:20.900 "subsystem": "iobuf", 00:17:20.900 "config": [ 00:17:20.900 { 00:17:20.900 "method": "iobuf_set_options", 00:17:20.900 "params": { 00:17:20.900 "small_pool_count": 8192, 00:17:20.900 "large_pool_count": 1024, 00:17:20.900 "small_bufsize": 8192, 00:17:20.900 "large_bufsize": 135168, 00:17:20.900 "enable_numa": false 00:17:20.900 } 00:17:20.900 } 00:17:20.900 ] 00:17:20.900 }, 00:17:20.900 { 00:17:20.900 "subsystem": "sock", 00:17:20.900 "config": [ 00:17:20.900 { 00:17:20.900 "method": "sock_set_default_impl", 00:17:20.900 "params": { 00:17:20.900 "impl_name": "posix" 00:17:20.900 } 00:17:20.900 }, 00:17:20.900 { 00:17:20.900 "method": "sock_impl_set_options", 00:17:20.900 "params": { 00:17:20.900 "impl_name": "ssl", 00:17:20.900 "recv_buf_size": 4096, 00:17:20.900 "send_buf_size": 4096, 00:17:20.900 "enable_recv_pipe": true, 00:17:20.900 "enable_quickack": false, 00:17:20.900 
"enable_placement_id": 0, 00:17:20.900 "enable_zerocopy_send_server": true, 00:17:20.900 "enable_zerocopy_send_client": false, 00:17:20.900 "zerocopy_threshold": 0, 00:17:20.900 "tls_version": 0, 00:17:20.900 "enable_ktls": false 00:17:20.900 } 00:17:20.900 }, 00:17:20.900 { 00:17:20.900 "method": "sock_impl_set_options", 00:17:20.900 "params": { 00:17:20.900 "impl_name": "posix", 00:17:20.900 "recv_buf_size": 2097152, 00:17:20.900 "send_buf_size": 2097152, 00:17:20.900 "enable_recv_pipe": true, 00:17:20.900 "enable_quickack": false, 00:17:20.900 "enable_placement_id": 0, 00:17:20.900 "enable_zerocopy_send_server": true, 00:17:20.900 "enable_zerocopy_send_client": false, 00:17:20.900 "zerocopy_threshold": 0, 00:17:20.900 "tls_version": 0, 00:17:20.900 "enable_ktls": false 00:17:20.900 } 00:17:20.900 } 00:17:20.900 ] 00:17:20.900 }, 00:17:20.900 { 00:17:20.900 "subsystem": "vmd", 00:17:20.900 "config": [] 00:17:20.900 }, 00:17:20.900 { 00:17:20.900 "subsystem": "accel", 00:17:20.900 "config": [ 00:17:20.900 { 00:17:20.900 "method": "accel_set_options", 00:17:20.900 "params": { 00:17:20.900 "small_cache_size": 128, 00:17:20.900 "large_cache_size": 16, 00:17:20.900 "task_count": 2048, 00:17:20.900 "sequence_count": 2048, 00:17:20.900 "buf_count": 2048 00:17:20.900 } 00:17:20.900 } 00:17:20.900 ] 00:17:20.900 }, 00:17:20.900 { 00:17:20.900 "subsystem": "bdev", 00:17:20.900 "config": [ 00:17:20.900 { 00:17:20.900 "method": "bdev_set_options", 00:17:20.900 "params": { 00:17:20.900 "bdev_io_pool_size": 65535, 00:17:20.900 "bdev_io_cache_size": 256, 00:17:20.900 "bdev_auto_examine": true, 00:17:20.900 "iobuf_small_cache_size": 128, 00:17:20.900 "iobuf_large_cache_size": 16 00:17:20.900 } 00:17:20.900 }, 00:17:20.900 { 00:17:20.900 "method": "bdev_raid_set_options", 00:17:20.900 "params": { 00:17:20.900 "process_window_size_kb": 1024, 00:17:20.900 "process_max_bandwidth_mb_sec": 0 00:17:20.900 } 00:17:20.900 }, 00:17:20.900 { 00:17:20.901 "method": "bdev_iscsi_set_options", 
00:17:20.901 "params": { 00:17:20.901 "timeout_sec": 30 00:17:20.901 } 00:17:20.901 }, 00:17:20.901 { 00:17:20.901 "method": "bdev_nvme_set_options", 00:17:20.901 "params": { 00:17:20.901 "action_on_timeout": "none", 00:17:20.901 "timeout_us": 0, 00:17:20.901 "timeout_admin_us": 0, 00:17:20.901 "keep_alive_timeout_ms": 10000, 00:17:20.901 "arbitration_burst": 0, 00:17:20.901 "low_priority_weight": 0, 00:17:20.901 "medium_priority_weight": 0, 00:17:20.901 "high_priority_weight": 0, 00:17:20.901 "nvme_adminq_poll_period_us": 10000, 00:17:20.901 "nvme_ioq_poll_period_us": 0, 00:17:20.901 "io_queue_requests": 0, 00:17:20.901 "delay_cmd_submit": true, 00:17:20.901 "transport_retry_count": 4, 00:17:20.901 "bdev_retry_count": 3, 00:17:20.901 "transport_ack_timeout": 0, 00:17:20.901 "ctrlr_loss_timeout_sec": 0, 00:17:20.901 "reconnect_delay_sec": 0, 00:17:20.901 "fast_io_fail_timeout_sec": 0, 00:17:20.901 "disable_auto_failback": false, 00:17:20.901 "generate_uuids": false, 00:17:20.901 "transport_tos": 0, 00:17:20.901 "nvme_error_stat": false, 00:17:20.901 "rdma_srq_size": 0, 00:17:20.901 "io_path_stat": false, 00:17:20.901 "allow_accel_sequence": false, 00:17:20.901 "rdma_max_cq_size": 0, 00:17:20.901 "rdma_cm_event_timeout_ms": 0, 00:17:20.901 "dhchap_digests": [ 00:17:20.901 "sha256", 00:17:20.901 "sha384", 00:17:20.901 "sha512" 00:17:20.901 ], 00:17:20.901 "dhchap_dhgroups": [ 00:17:20.901 "null", 00:17:20.901 "ffdhe2048", 00:17:20.901 "ffdhe3072", 00:17:20.901 "ffdhe4096", 00:17:20.901 "ffdhe6144", 00:17:20.901 "ffdhe8192" 00:17:20.901 ] 00:17:20.901 } 00:17:20.901 }, 00:17:20.901 { 00:17:20.901 "method": "bdev_nvme_set_hotplug", 00:17:20.901 "params": { 00:17:20.901 "period_us": 100000, 00:17:20.901 "enable": false 00:17:20.901 } 00:17:20.901 }, 00:17:20.901 { 00:17:20.901 "method": "bdev_wait_for_examine" 00:17:20.901 } 00:17:20.901 ] 00:17:20.901 }, 00:17:20.901 { 00:17:20.901 "subsystem": "scsi", 00:17:20.901 "config": null 00:17:20.901 }, 00:17:20.901 { 
00:17:20.901 "subsystem": "scheduler", 00:17:20.901 "config": [ 00:17:20.901 { 00:17:20.901 "method": "framework_set_scheduler", 00:17:20.901 "params": { 00:17:20.901 "name": "static" 00:17:20.901 } 00:17:20.901 } 00:17:20.901 ] 00:17:20.901 }, 00:17:20.901 { 00:17:20.901 "subsystem": "vhost_scsi", 00:17:20.901 "config": [] 00:17:20.901 }, 00:17:20.901 { 00:17:20.901 "subsystem": "vhost_blk", 00:17:20.901 "config": [] 00:17:20.901 }, 00:17:20.901 { 00:17:20.901 "subsystem": "ublk", 00:17:20.901 "config": [] 00:17:20.901 }, 00:17:20.901 { 00:17:20.901 "subsystem": "nbd", 00:17:20.901 "config": [] 00:17:20.901 }, 00:17:20.901 { 00:17:20.901 "subsystem": "nvmf", 00:17:20.901 "config": [ 00:17:20.901 { 00:17:20.901 "method": "nvmf_set_config", 00:17:20.901 "params": { 00:17:20.901 "discovery_filter": "match_any", 00:17:20.901 "admin_cmd_passthru": { 00:17:20.901 "identify_ctrlr": false 00:17:20.901 }, 00:17:20.901 "dhchap_digests": [ 00:17:20.901 "sha256", 00:17:20.901 "sha384", 00:17:20.901 "sha512" 00:17:20.901 ], 00:17:20.901 "dhchap_dhgroups": [ 00:17:20.901 "null", 00:17:20.901 "ffdhe2048", 00:17:20.901 "ffdhe3072", 00:17:20.901 "ffdhe4096", 00:17:20.901 "ffdhe6144", 00:17:20.901 "ffdhe8192" 00:17:20.901 ] 00:17:20.901 } 00:17:20.901 }, 00:17:20.901 { 00:17:20.901 "method": "nvmf_set_max_subsystems", 00:17:20.901 "params": { 00:17:20.901 "max_subsystems": 1024 00:17:20.901 } 00:17:20.901 }, 00:17:20.901 { 00:17:20.901 "method": "nvmf_set_crdt", 00:17:20.901 "params": { 00:17:20.901 "crdt1": 0, 00:17:20.901 "crdt2": 0, 00:17:20.901 "crdt3": 0 00:17:20.901 } 00:17:20.901 }, 00:17:20.901 { 00:17:20.901 "method": "nvmf_create_transport", 00:17:20.901 "params": { 00:17:20.901 "trtype": "TCP", 00:17:20.901 "max_queue_depth": 128, 00:17:20.901 "max_io_qpairs_per_ctrlr": 127, 00:17:20.901 "in_capsule_data_size": 4096, 00:17:20.901 "max_io_size": 131072, 00:17:20.901 "io_unit_size": 131072, 00:17:20.901 "max_aq_depth": 128, 00:17:20.901 "num_shared_buffers": 511, 
00:17:20.901 "buf_cache_size": 4294967295, 00:17:20.901 "dif_insert_or_strip": false, 00:17:20.901 "zcopy": false, 00:17:20.901 "c2h_success": true, 00:17:20.901 "sock_priority": 0, 00:17:20.901 "abort_timeout_sec": 1, 00:17:20.901 "ack_timeout": 0, 00:17:20.901 "data_wr_pool_size": 0 00:17:20.901 } 00:17:20.901 } 00:17:20.901 ] 00:17:20.901 }, 00:17:20.901 { 00:17:20.901 "subsystem": "iscsi", 00:17:20.901 "config": [ 00:17:20.901 { 00:17:20.901 "method": "iscsi_set_options", 00:17:20.901 "params": { 00:17:20.901 "node_base": "iqn.2016-06.io.spdk", 00:17:20.901 "max_sessions": 128, 00:17:20.901 "max_connections_per_session": 2, 00:17:20.901 "max_queue_depth": 64, 00:17:20.901 "default_time2wait": 2, 00:17:20.901 "default_time2retain": 20, 00:17:20.901 "first_burst_length": 8192, 00:17:20.901 "immediate_data": true, 00:17:20.901 "allow_duplicated_isid": false, 00:17:20.901 "error_recovery_level": 0, 00:17:20.901 "nop_timeout": 60, 00:17:20.901 "nop_in_interval": 30, 00:17:20.901 "disable_chap": false, 00:17:20.901 "require_chap": false, 00:17:20.901 "mutual_chap": false, 00:17:20.901 "chap_group": 0, 00:17:20.901 "max_large_datain_per_connection": 64, 00:17:20.901 "max_r2t_per_connection": 4, 00:17:20.901 "pdu_pool_size": 36864, 00:17:20.901 "immediate_data_pool_size": 16384, 00:17:20.901 "data_out_pool_size": 2048 00:17:20.901 } 00:17:20.901 } 00:17:20.901 ] 00:17:20.901 } 00:17:20.901 ] 00:17:20.901 } 00:17:20.901 22:58:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:20.901 22:58:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 56378 00:17:20.901 22:58:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 56378 ']' 00:17:20.901 22:58:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 56378 00:17:20.901 22:58:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:17:20.901 22:58:56 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:20.901 22:58:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56378 00:17:20.901 killing process with pid 56378 00:17:20.901 22:58:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:20.901 22:58:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:20.901 22:58:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56378' 00:17:20.901 22:58:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 56378 00:17:20.901 22:58:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 56378 00:17:22.273 22:58:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=56418 00:17:22.273 22:58:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:17:22.273 22:58:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:17:27.533 22:59:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 56418 00:17:27.533 22:59:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 56418 ']' 00:17:27.533 22:59:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 56418 00:17:27.533 22:59:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:17:27.533 22:59:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:27.533 22:59:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56418 00:17:27.533 killing process with pid 56418 00:17:27.533 22:59:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:27.533 22:59:02 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:27.533 22:59:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56418' 00:17:27.533 22:59:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 56418 00:17:27.533 22:59:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 56418 00:17:28.465 22:59:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:17:28.465 22:59:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:17:28.465 ************************************ 00:17:28.465 END TEST skip_rpc_with_json 00:17:28.465 ************************************ 00:17:28.465 00:17:28.465 real 0m8.492s 00:17:28.465 user 0m8.069s 00:17:28.465 sys 0m0.607s 00:17:28.465 22:59:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:28.465 22:59:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:17:28.465 22:59:03 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:17:28.465 22:59:03 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:28.465 22:59:03 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:28.465 22:59:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.465 ************************************ 00:17:28.465 START TEST skip_rpc_with_delay 00:17:28.465 ************************************ 00:17:28.465 22:59:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:17:28.465 22:59:03 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:17:28.465 22:59:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:17:28.465 
22:59:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:17:28.465 22:59:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:28.465 22:59:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:28.465 22:59:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:28.465 22:59:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:28.465 22:59:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:28.465 22:59:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:28.465 22:59:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:28.465 22:59:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:17:28.465 22:59:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:17:28.465 [2024-12-09 22:59:03.774778] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:17:28.465 ************************************ 00:17:28.465 END TEST skip_rpc_with_delay 00:17:28.465 ************************************ 00:17:28.465 22:59:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:17:28.465 22:59:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:28.465 22:59:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:28.465 22:59:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:28.465 00:17:28.465 real 0m0.124s 00:17:28.465 user 0m0.066s 00:17:28.465 sys 0m0.057s 00:17:28.465 22:59:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:28.465 22:59:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:17:28.723 22:59:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:17:28.723 22:59:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:17:28.723 22:59:03 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:17:28.723 22:59:03 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:28.723 22:59:03 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:28.723 22:59:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.723 ************************************ 00:17:28.723 START TEST exit_on_failed_rpc_init 00:17:28.723 ************************************ 00:17:28.723 22:59:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:17:28.724 22:59:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=56540 00:17:28.724 22:59:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 56540 00:17:28.724 22:59:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 56540 ']' 00:17:28.724 22:59:03 skip_rpc.exit_on_failed_rpc_init -- 
rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:17:28.724 22:59:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.724 22:59:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:28.724 22:59:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.724 22:59:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:28.724 22:59:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:17:28.724 [2024-12-09 22:59:03.925033] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:17:28.724 [2024-12-09 22:59:03.925155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56540 ] 00:17:28.724 [2024-12-09 22:59:04.076641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.981 [2024-12-09 22:59:04.162071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.547 22:59:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:29.547 22:59:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:17:29.547 22:59:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:17:29.547 22:59:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:17:29.547 22:59:04 skip_rpc.exit_on_failed_rpc_init 
-- common/autotest_common.sh@652 -- # local es=0 00:17:29.547 22:59:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:17:29.547 22:59:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:29.548 22:59:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.548 22:59:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:29.548 22:59:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.548 22:59:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:29.548 22:59:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.548 22:59:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:29.548 22:59:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:17:29.548 22:59:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:17:29.548 [2024-12-09 22:59:04.887650] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:17:29.548 [2024-12-09 22:59:04.887771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56553 ] 00:17:29.806 [2024-12-09 22:59:05.048843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.806 [2024-12-09 22:59:05.149942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.806 [2024-12-09 22:59:05.150199] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:17:29.806 [2024-12-09 22:59:05.150468] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:17:29.806 [2024-12-09 22:59:05.151120] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:30.064 22:59:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:17:30.064 22:59:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:30.064 22:59:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:17:30.064 22:59:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:17:30.064 22:59:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:17:30.064 22:59:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:30.064 22:59:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:30.064 22:59:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 56540 00:17:30.064 22:59:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 56540 ']' 00:17:30.064 22:59:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 56540 00:17:30.064 22:59:05 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:17:30.064 22:59:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:30.064 22:59:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56540 00:17:30.064 killing process with pid 56540 00:17:30.064 22:59:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:30.064 22:59:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:30.064 22:59:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56540' 00:17:30.064 22:59:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 56540 00:17:30.064 22:59:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 56540 00:17:31.437 ************************************ 00:17:31.437 END TEST exit_on_failed_rpc_init 00:17:31.437 ************************************ 00:17:31.437 00:17:31.437 real 0m2.714s 00:17:31.437 user 0m3.072s 00:17:31.437 sys 0m0.393s 00:17:31.437 22:59:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:31.437 22:59:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:17:31.437 22:59:06 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:17:31.437 ************************************ 00:17:31.437 END TEST skip_rpc 00:17:31.437 ************************************ 00:17:31.437 00:17:31.437 real 0m17.889s 00:17:31.437 user 0m17.186s 00:17:31.437 sys 0m1.528s 00:17:31.437 22:59:06 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:31.437 22:59:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.437 22:59:06 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:17:31.437 22:59:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:31.437 22:59:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:31.437 22:59:06 -- common/autotest_common.sh@10 -- # set +x 00:17:31.437 ************************************ 00:17:31.437 START TEST rpc_client 00:17:31.437 ************************************ 00:17:31.437 22:59:06 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:17:31.437 * Looking for test storage... 00:17:31.437 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:17:31.437 22:59:06 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:31.437 22:59:06 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:17:31.437 22:59:06 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:31.437 22:59:06 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:31.437 22:59:06 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:31.437 22:59:06 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:31.437 22:59:06 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:31.437 22:59:06 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:17:31.437 22:59:06 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:17:31.437 22:59:06 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:17:31.437 22:59:06 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:17:31.437 22:59:06 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:17:31.437 22:59:06 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:17:31.437 22:59:06 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:17:31.437 22:59:06 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:31.437 22:59:06 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:17:31.437 22:59:06 rpc_client -- scripts/common.sh@345 
-- # : 1 00:17:31.437 22:59:06 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:31.437 22:59:06 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:31.437 22:59:06 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:17:31.437 22:59:06 rpc_client -- scripts/common.sh@353 -- # local d=1 00:17:31.437 22:59:06 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:31.437 22:59:06 rpc_client -- scripts/common.sh@355 -- # echo 1 00:17:31.437 22:59:06 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:17:31.437 22:59:06 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:17:31.437 22:59:06 rpc_client -- scripts/common.sh@353 -- # local d=2 00:17:31.437 22:59:06 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:31.437 22:59:06 rpc_client -- scripts/common.sh@355 -- # echo 2 00:17:31.437 22:59:06 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:17:31.437 22:59:06 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:31.437 22:59:06 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:31.437 22:59:06 rpc_client -- scripts/common.sh@368 -- # return 0 00:17:31.437 22:59:06 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:31.437 22:59:06 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:31.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.437 --rc genhtml_branch_coverage=1 00:17:31.437 --rc genhtml_function_coverage=1 00:17:31.437 --rc genhtml_legend=1 00:17:31.437 --rc geninfo_all_blocks=1 00:17:31.437 --rc geninfo_unexecuted_blocks=1 00:17:31.437 00:17:31.437 ' 00:17:31.437 22:59:06 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:31.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.437 --rc genhtml_branch_coverage=1 00:17:31.437 --rc genhtml_function_coverage=1 00:17:31.437 --rc 
genhtml_legend=1 00:17:31.437 --rc geninfo_all_blocks=1 00:17:31.437 --rc geninfo_unexecuted_blocks=1 00:17:31.437 00:17:31.437 ' 00:17:31.437 22:59:06 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:31.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.437 --rc genhtml_branch_coverage=1 00:17:31.437 --rc genhtml_function_coverage=1 00:17:31.437 --rc genhtml_legend=1 00:17:31.437 --rc geninfo_all_blocks=1 00:17:31.437 --rc geninfo_unexecuted_blocks=1 00:17:31.437 00:17:31.437 ' 00:17:31.437 22:59:06 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:31.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.437 --rc genhtml_branch_coverage=1 00:17:31.437 --rc genhtml_function_coverage=1 00:17:31.437 --rc genhtml_legend=1 00:17:31.437 --rc geninfo_all_blocks=1 00:17:31.437 --rc geninfo_unexecuted_blocks=1 00:17:31.437 00:17:31.437 ' 00:17:31.437 22:59:06 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:17:31.701 OK 00:17:31.701 22:59:06 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:17:31.701 00:17:31.701 real 0m0.191s 00:17:31.701 user 0m0.110s 00:17:31.701 sys 0m0.089s 00:17:31.701 22:59:06 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:31.701 22:59:06 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:17:31.701 ************************************ 00:17:31.701 END TEST rpc_client 00:17:31.701 ************************************ 00:17:31.701 22:59:06 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:17:31.701 22:59:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:31.701 22:59:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:31.701 22:59:06 -- common/autotest_common.sh@10 -- # set +x 00:17:31.701 ************************************ 00:17:31.701 START TEST json_config 
00:17:31.701 ************************************ 00:17:31.701 22:59:06 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:17:31.701 22:59:06 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:31.701 22:59:06 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:17:31.701 22:59:06 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:31.701 22:59:06 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:31.701 22:59:06 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:31.701 22:59:06 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:31.701 22:59:06 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:31.701 22:59:06 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:17:31.701 22:59:06 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:17:31.701 22:59:06 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:17:31.701 22:59:06 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:17:31.701 22:59:06 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:17:31.701 22:59:06 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:17:31.701 22:59:06 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:17:31.701 22:59:06 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:31.701 22:59:06 json_config -- scripts/common.sh@344 -- # case "$op" in 00:17:31.701 22:59:06 json_config -- scripts/common.sh@345 -- # : 1 00:17:31.701 22:59:06 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:31.701 22:59:06 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:31.701 22:59:06 json_config -- scripts/common.sh@365 -- # decimal 1 00:17:31.701 22:59:06 json_config -- scripts/common.sh@353 -- # local d=1 00:17:31.701 22:59:06 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:31.701 22:59:06 json_config -- scripts/common.sh@355 -- # echo 1 00:17:31.701 22:59:06 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:17:31.701 22:59:06 json_config -- scripts/common.sh@366 -- # decimal 2 00:17:31.701 22:59:06 json_config -- scripts/common.sh@353 -- # local d=2 00:17:31.701 22:59:06 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:31.701 22:59:06 json_config -- scripts/common.sh@355 -- # echo 2 00:17:31.701 22:59:06 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:17:31.701 22:59:06 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:31.701 22:59:06 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:31.701 22:59:06 json_config -- scripts/common.sh@368 -- # return 0 00:17:31.701 22:59:06 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:31.701 22:59:06 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:31.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.701 --rc genhtml_branch_coverage=1 00:17:31.701 --rc genhtml_function_coverage=1 00:17:31.701 --rc genhtml_legend=1 00:17:31.701 --rc geninfo_all_blocks=1 00:17:31.701 --rc geninfo_unexecuted_blocks=1 00:17:31.701 00:17:31.701 ' 00:17:31.701 22:59:06 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:31.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.701 --rc genhtml_branch_coverage=1 00:17:31.701 --rc genhtml_function_coverage=1 00:17:31.701 --rc genhtml_legend=1 00:17:31.701 --rc geninfo_all_blocks=1 00:17:31.701 --rc geninfo_unexecuted_blocks=1 00:17:31.701 00:17:31.701 ' 00:17:31.701 22:59:06 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:31.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.701 --rc genhtml_branch_coverage=1 00:17:31.701 --rc genhtml_function_coverage=1 00:17:31.701 --rc genhtml_legend=1 00:17:31.701 --rc geninfo_all_blocks=1 00:17:31.701 --rc geninfo_unexecuted_blocks=1 00:17:31.701 00:17:31.701 ' 00:17:31.701 22:59:06 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:31.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.701 --rc genhtml_branch_coverage=1 00:17:31.701 --rc genhtml_function_coverage=1 00:17:31.701 --rc genhtml_legend=1 00:17:31.701 --rc geninfo_all_blocks=1 00:17:31.701 --rc geninfo_unexecuted_blocks=1 00:17:31.701 00:17:31.701 ' 00:17:31.701 22:59:06 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:31.701 22:59:06 json_config -- nvmf/common.sh@7 -- # uname -s 00:17:31.701 22:59:07 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:31.701 22:59:07 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:31.701 22:59:07 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:31.701 22:59:07 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:31.701 22:59:07 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:31.701 22:59:07 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:31.701 22:59:07 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:31.701 22:59:07 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:31.701 22:59:07 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:31.701 22:59:07 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:31.701 22:59:07 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f3f4faf8-991c-49df-aa98-6b75bac91fa9 00:17:31.701 22:59:07 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=f3f4faf8-991c-49df-aa98-6b75bac91fa9 00:17:31.701 22:59:07 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:31.701 22:59:07 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:31.701 22:59:07 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:31.701 22:59:07 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:31.701 22:59:07 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:31.701 22:59:07 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:17:31.701 22:59:07 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:31.701 22:59:07 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:31.701 22:59:07 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:31.701 22:59:07 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.701 22:59:07 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.701 22:59:07 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.701 22:59:07 json_config -- paths/export.sh@5 -- # export PATH 00:17:31.702 22:59:07 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.702 22:59:07 json_config -- nvmf/common.sh@51 -- # : 0 00:17:31.702 22:59:07 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:31.702 22:59:07 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:31.702 22:59:07 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:31.702 22:59:07 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:31.702 22:59:07 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:31.702 22:59:07 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:31.702 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:31.702 22:59:07 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:31.702 22:59:07 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:31.702 22:59:07 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:31.702 WARNING: No tests are enabled so not running JSON configuration tests 00:17:31.702 22:59:07 json_config -- 
json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:17:31.702 22:59:07 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:17:31.702 22:59:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:17:31.702 22:59:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:17:31.702 22:59:07 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:17:31.702 22:59:07 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:17:31.702 22:59:07 json_config -- json_config/json_config.sh@28 -- # exit 0 00:17:31.702 00:17:31.702 real 0m0.143s 00:17:31.702 user 0m0.088s 00:17:31.702 sys 0m0.056s 00:17:31.702 ************************************ 00:17:31.702 END TEST json_config 00:17:31.702 ************************************ 00:17:31.702 22:59:07 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:31.702 22:59:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:31.702 22:59:07 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:17:31.702 22:59:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:31.702 22:59:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:31.702 22:59:07 -- common/autotest_common.sh@10 -- # set +x 00:17:31.702 ************************************ 00:17:31.702 START TEST json_config_extra_key 00:17:31.702 ************************************ 00:17:31.702 22:59:07 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:17:31.959 22:59:07 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:31.959 22:59:07 json_config_extra_key -- 
common/autotest_common.sh@1711 -- # lcov --version 00:17:31.959 22:59:07 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:31.959 22:59:07 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:31.959 22:59:07 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:31.959 22:59:07 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:31.959 22:59:07 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:31.959 22:59:07 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:17:31.959 22:59:07 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:17:31.959 22:59:07 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:17:31.959 22:59:07 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:17:31.959 22:59:07 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:17:31.960 22:59:07 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:17:31.960 22:59:07 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:17:31.960 22:59:07 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:31.960 22:59:07 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:17:31.960 22:59:07 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:17:31.960 22:59:07 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:31.960 22:59:07 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:31.960 22:59:07 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:17:31.960 22:59:07 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:17:31.960 22:59:07 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:31.960 22:59:07 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:17:31.960 22:59:07 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:17:31.960 22:59:07 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:17:31.960 22:59:07 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:17:31.960 22:59:07 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:31.960 22:59:07 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:17:31.960 22:59:07 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:17:31.960 22:59:07 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:31.960 22:59:07 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:31.960 22:59:07 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:17:31.960 22:59:07 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:31.960 22:59:07 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:31.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.960 --rc genhtml_branch_coverage=1 00:17:31.960 --rc genhtml_function_coverage=1 00:17:31.960 --rc genhtml_legend=1 00:17:31.960 --rc geninfo_all_blocks=1 00:17:31.960 --rc geninfo_unexecuted_blocks=1 00:17:31.960 00:17:31.960 ' 00:17:31.960 22:59:07 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:31.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.960 --rc genhtml_branch_coverage=1 00:17:31.960 --rc genhtml_function_coverage=1 00:17:31.960 --rc 
genhtml_legend=1 00:17:31.960 --rc geninfo_all_blocks=1 00:17:31.960 --rc geninfo_unexecuted_blocks=1 00:17:31.960 00:17:31.960 ' 00:17:31.960 22:59:07 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:31.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.960 --rc genhtml_branch_coverage=1 00:17:31.960 --rc genhtml_function_coverage=1 00:17:31.960 --rc genhtml_legend=1 00:17:31.960 --rc geninfo_all_blocks=1 00:17:31.960 --rc geninfo_unexecuted_blocks=1 00:17:31.960 00:17:31.960 ' 00:17:31.960 22:59:07 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:31.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.960 --rc genhtml_branch_coverage=1 00:17:31.960 --rc genhtml_function_coverage=1 00:17:31.960 --rc genhtml_legend=1 00:17:31.960 --rc geninfo_all_blocks=1 00:17:31.960 --rc geninfo_unexecuted_blocks=1 00:17:31.960 00:17:31.960 ' 00:17:31.960 22:59:07 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:31.960 22:59:07 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:17:31.960 22:59:07 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:31.960 22:59:07 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:31.960 22:59:07 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:31.960 22:59:07 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:31.960 22:59:07 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:31.960 22:59:07 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:31.960 22:59:07 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:31.960 22:59:07 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:31.960 22:59:07 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:31.960 22:59:07 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:31.960 22:59:07 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f3f4faf8-991c-49df-aa98-6b75bac91fa9 00:17:31.960 22:59:07 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=f3f4faf8-991c-49df-aa98-6b75bac91fa9 00:17:31.960 22:59:07 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:31.960 22:59:07 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:31.960 22:59:07 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:31.960 22:59:07 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:31.960 22:59:07 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:31.960 22:59:07 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:17:31.960 22:59:07 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:31.960 22:59:07 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:31.960 22:59:07 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:31.960 22:59:07 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.960 22:59:07 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.960 22:59:07 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.960 22:59:07 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:17:31.960 22:59:07 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.960 22:59:07 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:17:31.960 22:59:07 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:31.960 22:59:07 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:31.960 22:59:07 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:31.960 22:59:07 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:31.960 22:59:07 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:17:31.960 22:59:07 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:31.960 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:31.960 22:59:07 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:31.960 22:59:07 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:31.960 22:59:07 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:31.960 22:59:07 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:17:31.960 22:59:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:17:31.960 22:59:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:17:31.960 22:59:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:17:31.960 22:59:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:17:31.960 22:59:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:17:31.960 22:59:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:17:31.960 22:59:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:17:31.960 22:59:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:17:31.960 22:59:07 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:17:31.960 INFO: launching applications... 00:17:31.960 22:59:07 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:17:31.960 22:59:07 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:17:31.960 22:59:07 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:17:31.960 22:59:07 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:17:31.960 22:59:07 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:17:31.960 22:59:07 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:17:31.960 22:59:07 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:17:31.960 22:59:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:17:31.960 22:59:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:17:31.960 22:59:07 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=56746 00:17:31.960 22:59:07 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:17:31.960 Waiting for target to run... 00:17:31.960 22:59:07 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 56746 /var/tmp/spdk_tgt.sock 00:17:31.960 22:59:07 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 56746 ']' 00:17:31.960 22:59:07 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:17:31.960 22:59:07 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:31.960 22:59:07 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:17:31.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
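[Editor's note] The `lt 1.15 2` / `cmp_versions` trace repeated above (splitting each version on `.` into `ver1`/`ver2` arrays and comparing component by component) can be sketched as follows. This is a hedged, minimal reimplementation of the same idea, not SPDK's actual `scripts/common.sh`:

```shell
#!/usr/bin/env bash
# Minimal sketch of the dot-separated version comparison traced in the log.
# lt VER1 VER2 -> exit status 0 when VER1 < VER2, nonzero otherwise.
lt() {
  local IFS=.                       # split versions on '.' into arrays
  local -a v1=($1) v2=($2)
  local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < n; i++ )); do
    local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components count as 0
    (( a > b )) && return 1
    (( a < b )) && return 0
  done
  return 1                          # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 < 2"
```

In the traced script this check gates which lcov coverage options (`--rc lcov_branch_coverage=1 ...`) get exported.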
00:17:31.960 22:59:07 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:17:31.961 22:59:07 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:31.961 22:59:07 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:17:31.961 [2024-12-09 22:59:07.266235] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:17:31.961 [2024-12-09 22:59:07.266329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56746 ] 00:17:32.526 [2024-12-09 22:59:07.584711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.526 [2024-12-09 22:59:07.677715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.092 00:17:33.092 INFO: shutting down applications... 00:17:33.092 22:59:08 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:33.092 22:59:08 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:17:33.092 22:59:08 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:17:33.092 22:59:08 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
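[Editor's note] The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock..." step above follows a common poll-with-retry-cap pattern. A hedged sketch of that pattern (the function name and retry interval here are illustrative, not SPDK's exact `waitforlisten` implementation):

```shell
#!/usr/bin/env bash
# Poll until a UNIX-domain socket file appears, bounded by a retry cap.
# waitfor_socket SOCK_PATH [MAX_RETRIES] -> 0 if the socket showed up, 1 on timeout.
waitfor_socket() {
  local sock=$1 max_retries=${2:-100}
  local i
  for (( i = 0; i < max_retries; i++ )); do
    [ -S "$sock" ] && return 0      # -S: file exists and is a socket
    sleep 0.1
  done
  return 1
}
```

The real harness additionally issues an RPC over the socket to confirm the target is answering, not just that the socket file exists.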
00:17:33.092 22:59:08 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:17:33.092 22:59:08 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:17:33.092 22:59:08 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:17:33.092 22:59:08 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 56746 ]] 00:17:33.092 22:59:08 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 56746 00:17:33.092 22:59:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:17:33.092 22:59:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:33.092 22:59:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56746 00:17:33.092 22:59:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:17:33.350 22:59:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:17:33.350 22:59:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:33.350 22:59:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56746 00:17:33.350 22:59:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:17:33.915 22:59:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:17:33.915 22:59:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:33.915 22:59:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56746 00:17:33.915 22:59:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:17:34.485 22:59:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:17:34.485 22:59:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:34.485 22:59:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56746 00:17:34.485 22:59:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:17:35.051 22:59:10 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:17:35.051 22:59:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:35.051 22:59:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56746 00:17:35.051 22:59:10 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:17:35.051 22:59:10 json_config_extra_key -- json_config/common.sh@43 -- # break 00:17:35.051 22:59:10 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:17:35.051 22:59:10 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:17:35.051 SPDK target shutdown done 00:17:35.051 Success 00:17:35.051 22:59:10 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:17:35.051 ************************************ 00:17:35.051 END TEST json_config_extra_key 00:17:35.052 ************************************ 00:17:35.052 00:17:35.052 real 0m3.144s 00:17:35.052 user 0m2.714s 00:17:35.052 sys 0m0.401s 00:17:35.052 22:59:10 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:35.052 22:59:10 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:17:35.052 22:59:10 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:17:35.052 22:59:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:35.052 22:59:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:35.052 22:59:10 -- common/autotest_common.sh@10 -- # set +x 00:17:35.052 ************************************ 00:17:35.052 START TEST alias_rpc 00:17:35.052 ************************************ 00:17:35.052 22:59:10 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:17:35.052 * Looking for test storage... 
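[Editor's note] The shutdown sequence traced above (`kill -SIGINT 56746`, then repeated `kill -0 56746` probes with `sleep 0.5` until "SPDK target shutdown done") follows this shape. A hedged sketch, assuming the same 30-iteration / half-second bounds seen in the trace:

```shell
#!/usr/bin/env bash
# Send SIGINT, then poll with `kill -0` (signal 0 = existence check only)
# until the process exits, giving up after 30 half-second attempts (~15s).
shutdown_app() {
  local pid=$1 i
  kill -SIGINT "$pid" 2>/dev/null
  for (( i = 0; i < 30; i++ )); do
    kill -0 "$pid" 2>/dev/null || return 0   # process gone: clean shutdown
    sleep 0.5
  done
  return 1                                   # still alive after the cap
}
```

`kill -0` delivers no signal; it only reports (via exit status) whether the pid still exists and is signalable, which is why the log shows it once per `sleep 0.5` iteration.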
00:17:35.052 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:17:35.052 22:59:10 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:35.052 22:59:10 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:17:35.052 22:59:10 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:35.052 22:59:10 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:35.052 22:59:10 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:35.052 22:59:10 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:35.052 22:59:10 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:35.052 22:59:10 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:35.052 22:59:10 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:35.052 22:59:10 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:35.052 22:59:10 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:35.052 22:59:10 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:35.052 22:59:10 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:35.052 22:59:10 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:35.052 22:59:10 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:35.052 22:59:10 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:35.052 22:59:10 alias_rpc -- scripts/common.sh@345 -- # : 1 00:17:35.052 22:59:10 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:35.052 22:59:10 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:35.052 22:59:10 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:35.052 22:59:10 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:17:35.052 22:59:10 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:35.052 22:59:10 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:17:35.052 22:59:10 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:35.052 22:59:10 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:35.052 22:59:10 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:17:35.052 22:59:10 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:35.052 22:59:10 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:17:35.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.052 22:59:10 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:35.052 22:59:10 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:35.052 22:59:10 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:35.052 22:59:10 alias_rpc -- scripts/common.sh@368 -- # return 0 00:17:35.052 22:59:10 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:35.052 22:59:10 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:35.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.052 --rc genhtml_branch_coverage=1 00:17:35.052 --rc genhtml_function_coverage=1 00:17:35.052 --rc genhtml_legend=1 00:17:35.052 --rc geninfo_all_blocks=1 00:17:35.052 --rc geninfo_unexecuted_blocks=1 00:17:35.052 00:17:35.052 ' 00:17:35.052 22:59:10 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:35.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.052 --rc genhtml_branch_coverage=1 00:17:35.052 --rc genhtml_function_coverage=1 00:17:35.052 --rc genhtml_legend=1 00:17:35.052 --rc geninfo_all_blocks=1 00:17:35.052 --rc 
geninfo_unexecuted_blocks=1 00:17:35.052 00:17:35.052 ' 00:17:35.052 22:59:10 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:35.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.052 --rc genhtml_branch_coverage=1 00:17:35.052 --rc genhtml_function_coverage=1 00:17:35.052 --rc genhtml_legend=1 00:17:35.052 --rc geninfo_all_blocks=1 00:17:35.052 --rc geninfo_unexecuted_blocks=1 00:17:35.052 00:17:35.052 ' 00:17:35.052 22:59:10 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:35.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.052 --rc genhtml_branch_coverage=1 00:17:35.052 --rc genhtml_function_coverage=1 00:17:35.052 --rc genhtml_legend=1 00:17:35.052 --rc geninfo_all_blocks=1 00:17:35.052 --rc geninfo_unexecuted_blocks=1 00:17:35.052 00:17:35.052 ' 00:17:35.052 22:59:10 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:17:35.052 22:59:10 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=56839 00:17:35.052 22:59:10 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 56839 00:17:35.052 22:59:10 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 56839 ']' 00:17:35.052 22:59:10 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.052 22:59:10 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:35.052 22:59:10 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.052 22:59:10 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:35.052 22:59:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.052 22:59:10 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:35.311 [2024-12-09 22:59:10.474940] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:17:35.311 [2024-12-09 22:59:10.475057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56839 ] 00:17:35.311 [2024-12-09 22:59:10.636001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.569 [2024-12-09 22:59:10.739049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.134 22:59:11 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:36.134 22:59:11 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:36.134 22:59:11 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:17:36.391 22:59:11 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 56839 00:17:36.391 22:59:11 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 56839 ']' 00:17:36.391 22:59:11 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 56839 00:17:36.391 22:59:11 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:17:36.391 22:59:11 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:36.391 22:59:11 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56839 00:17:36.391 killing process with pid 56839 00:17:36.391 22:59:11 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:36.391 22:59:11 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:36.391 22:59:11 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56839' 00:17:36.391 22:59:11 alias_rpc -- common/autotest_common.sh@973 -- # kill 56839 00:17:36.391 22:59:11 alias_rpc -- common/autotest_common.sh@978 -- # wait 56839 00:17:37.792 ************************************ 00:17:37.792 END TEST alias_rpc 00:17:37.792 ************************************ 00:17:37.792 00:17:37.792 real 
0m2.906s 00:17:37.792 user 0m3.042s 00:17:37.793 sys 0m0.397s 00:17:37.793 22:59:13 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:37.793 22:59:13 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:38.051 22:59:13 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:17:38.051 22:59:13 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:17:38.051 22:59:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:38.051 22:59:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:38.051 22:59:13 -- common/autotest_common.sh@10 -- # set +x 00:17:38.051 ************************************ 00:17:38.051 START TEST spdkcli_tcp 00:17:38.051 ************************************ 00:17:38.051 22:59:13 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:17:38.051 * Looking for test storage... 00:17:38.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:38.051 22:59:13 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:38.051 22:59:13 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:17:38.051 22:59:13 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:38.051 22:59:13 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:38.051 22:59:13 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:38.051 22:59:13 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:38.051 22:59:13 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:38.051 22:59:13 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:17:38.051 22:59:13 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:17:38.051 22:59:13 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:17:38.051 22:59:13 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:17:38.051 22:59:13 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:17:38.051 
22:59:13 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:17:38.051 22:59:13 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:17:38.051 22:59:13 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:38.051 22:59:13 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:17:38.051 22:59:13 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:17:38.051 22:59:13 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:38.051 22:59:13 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:38.051 22:59:13 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:17:38.051 22:59:13 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:17:38.051 22:59:13 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:38.051 22:59:13 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:17:38.051 22:59:13 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:17:38.051 22:59:13 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:17:38.051 22:59:13 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:17:38.051 22:59:13 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:38.051 22:59:13 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:17:38.051 22:59:13 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:17:38.051 22:59:13 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:38.051 22:59:13 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:38.051 22:59:13 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:17:38.051 22:59:13 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:38.051 22:59:13 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:38.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.051 --rc genhtml_branch_coverage=1 00:17:38.051 --rc genhtml_function_coverage=1 00:17:38.051 --rc genhtml_legend=1 
00:17:38.051 --rc geninfo_all_blocks=1 00:17:38.051 --rc geninfo_unexecuted_blocks=1 00:17:38.051 00:17:38.051 ' 00:17:38.051 22:59:13 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:38.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.051 --rc genhtml_branch_coverage=1 00:17:38.051 --rc genhtml_function_coverage=1 00:17:38.051 --rc genhtml_legend=1 00:17:38.051 --rc geninfo_all_blocks=1 00:17:38.051 --rc geninfo_unexecuted_blocks=1 00:17:38.051 00:17:38.051 ' 00:17:38.051 22:59:13 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:38.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.051 --rc genhtml_branch_coverage=1 00:17:38.051 --rc genhtml_function_coverage=1 00:17:38.051 --rc genhtml_legend=1 00:17:38.051 --rc geninfo_all_blocks=1 00:17:38.051 --rc geninfo_unexecuted_blocks=1 00:17:38.051 00:17:38.051 ' 00:17:38.051 22:59:13 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:38.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.051 --rc genhtml_branch_coverage=1 00:17:38.051 --rc genhtml_function_coverage=1 00:17:38.051 --rc genhtml_legend=1 00:17:38.051 --rc geninfo_all_blocks=1 00:17:38.051 --rc geninfo_unexecuted_blocks=1 00:17:38.051 00:17:38.051 ' 00:17:38.051 22:59:13 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:38.051 22:59:13 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:38.051 22:59:13 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:38.051 22:59:13 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:17:38.051 22:59:13 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:17:38.051 22:59:13 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:38.051 22:59:13 spdkcli_tcp -- 
spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:17:38.051 22:59:13 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:38.051 22:59:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:38.052 22:59:13 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=56935 00:17:38.052 22:59:13 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 56935 00:17:38.052 22:59:13 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:17:38.052 22:59:13 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 56935 ']' 00:17:38.052 22:59:13 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.052 22:59:13 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:38.052 22:59:13 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.052 22:59:13 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:38.052 22:59:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:38.052 [2024-12-09 22:59:13.396887] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:17:38.052 [2024-12-09 22:59:13.397172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56935 ] 00:17:38.309 [2024-12-09 22:59:13.553023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:38.309 [2024-12-09 22:59:13.641035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.309 [2024-12-09 22:59:13.641155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.903 22:59:14 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:38.903 22:59:14 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:17:38.903 22:59:14 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=56952 00:17:38.903 22:59:14 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:17:38.903 22:59:14 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:17:39.200 [ 00:17:39.200 "bdev_malloc_delete", 00:17:39.200 "bdev_malloc_create", 00:17:39.200 "bdev_null_resize", 00:17:39.200 "bdev_null_delete", 00:17:39.200 "bdev_null_create", 00:17:39.200 "bdev_nvme_cuse_unregister", 00:17:39.200 "bdev_nvme_cuse_register", 00:17:39.200 "bdev_opal_new_user", 00:17:39.200 "bdev_opal_set_lock_state", 00:17:39.200 "bdev_opal_delete", 00:17:39.200 "bdev_opal_get_info", 00:17:39.200 "bdev_opal_create", 00:17:39.200 "bdev_nvme_opal_revert", 00:17:39.200 "bdev_nvme_opal_init", 00:17:39.200 "bdev_nvme_send_cmd", 00:17:39.200 "bdev_nvme_set_keys", 00:17:39.200 "bdev_nvme_get_path_iostat", 00:17:39.200 "bdev_nvme_get_mdns_discovery_info", 00:17:39.200 "bdev_nvme_stop_mdns_discovery", 00:17:39.200 "bdev_nvme_start_mdns_discovery", 00:17:39.200 "bdev_nvme_set_multipath_policy", 00:17:39.200 
"bdev_nvme_set_preferred_path", 00:17:39.200 "bdev_nvme_get_io_paths", 00:17:39.200 "bdev_nvme_remove_error_injection", 00:17:39.200 "bdev_nvme_add_error_injection", 00:17:39.200 "bdev_nvme_get_discovery_info", 00:17:39.200 "bdev_nvme_stop_discovery", 00:17:39.200 "bdev_nvme_start_discovery", 00:17:39.200 "bdev_nvme_get_controller_health_info", 00:17:39.200 "bdev_nvme_disable_controller", 00:17:39.200 "bdev_nvme_enable_controller", 00:17:39.200 "bdev_nvme_reset_controller", 00:17:39.200 "bdev_nvme_get_transport_statistics", 00:17:39.200 "bdev_nvme_apply_firmware", 00:17:39.200 "bdev_nvme_detach_controller", 00:17:39.200 "bdev_nvme_get_controllers", 00:17:39.200 "bdev_nvme_attach_controller", 00:17:39.200 "bdev_nvme_set_hotplug", 00:17:39.200 "bdev_nvme_set_options", 00:17:39.200 "bdev_passthru_delete", 00:17:39.200 "bdev_passthru_create", 00:17:39.200 "bdev_lvol_set_parent_bdev", 00:17:39.200 "bdev_lvol_set_parent", 00:17:39.200 "bdev_lvol_check_shallow_copy", 00:17:39.200 "bdev_lvol_start_shallow_copy", 00:17:39.200 "bdev_lvol_grow_lvstore", 00:17:39.200 "bdev_lvol_get_lvols", 00:17:39.200 "bdev_lvol_get_lvstores", 00:17:39.200 "bdev_lvol_delete", 00:17:39.200 "bdev_lvol_set_read_only", 00:17:39.200 "bdev_lvol_resize", 00:17:39.200 "bdev_lvol_decouple_parent", 00:17:39.200 "bdev_lvol_inflate", 00:17:39.200 "bdev_lvol_rename", 00:17:39.200 "bdev_lvol_clone_bdev", 00:17:39.200 "bdev_lvol_clone", 00:17:39.200 "bdev_lvol_snapshot", 00:17:39.200 "bdev_lvol_create", 00:17:39.200 "bdev_lvol_delete_lvstore", 00:17:39.200 "bdev_lvol_rename_lvstore", 00:17:39.200 "bdev_lvol_create_lvstore", 00:17:39.200 "bdev_raid_set_options", 00:17:39.201 "bdev_raid_remove_base_bdev", 00:17:39.201 "bdev_raid_add_base_bdev", 00:17:39.201 "bdev_raid_delete", 00:17:39.201 "bdev_raid_create", 00:17:39.201 "bdev_raid_get_bdevs", 00:17:39.201 "bdev_error_inject_error", 00:17:39.201 "bdev_error_delete", 00:17:39.201 "bdev_error_create", 00:17:39.201 "bdev_split_delete", 00:17:39.201 
"bdev_split_create", 00:17:39.201 "bdev_delay_delete", 00:17:39.201 "bdev_delay_create", 00:17:39.201 "bdev_delay_update_latency", 00:17:39.201 "bdev_zone_block_delete", 00:17:39.201 "bdev_zone_block_create", 00:17:39.201 "blobfs_create", 00:17:39.201 "blobfs_detect", 00:17:39.201 "blobfs_set_cache_size", 00:17:39.201 "bdev_aio_delete", 00:17:39.201 "bdev_aio_rescan", 00:17:39.201 "bdev_aio_create", 00:17:39.201 "bdev_ftl_set_property", 00:17:39.201 "bdev_ftl_get_properties", 00:17:39.201 "bdev_ftl_get_stats", 00:17:39.201 "bdev_ftl_unmap", 00:17:39.201 "bdev_ftl_unload", 00:17:39.201 "bdev_ftl_delete", 00:17:39.201 "bdev_ftl_load", 00:17:39.201 "bdev_ftl_create", 00:17:39.201 "bdev_virtio_attach_controller", 00:17:39.201 "bdev_virtio_scsi_get_devices", 00:17:39.201 "bdev_virtio_detach_controller", 00:17:39.201 "bdev_virtio_blk_set_hotplug", 00:17:39.201 "bdev_iscsi_delete", 00:17:39.201 "bdev_iscsi_create", 00:17:39.201 "bdev_iscsi_set_options", 00:17:39.201 "accel_error_inject_error", 00:17:39.201 "ioat_scan_accel_module", 00:17:39.201 "dsa_scan_accel_module", 00:17:39.201 "iaa_scan_accel_module", 00:17:39.201 "keyring_file_remove_key", 00:17:39.201 "keyring_file_add_key", 00:17:39.201 "keyring_linux_set_options", 00:17:39.201 "fsdev_aio_delete", 00:17:39.201 "fsdev_aio_create", 00:17:39.201 "iscsi_get_histogram", 00:17:39.201 "iscsi_enable_histogram", 00:17:39.201 "iscsi_set_options", 00:17:39.201 "iscsi_get_auth_groups", 00:17:39.201 "iscsi_auth_group_remove_secret", 00:17:39.201 "iscsi_auth_group_add_secret", 00:17:39.201 "iscsi_delete_auth_group", 00:17:39.201 "iscsi_create_auth_group", 00:17:39.201 "iscsi_set_discovery_auth", 00:17:39.201 "iscsi_get_options", 00:17:39.201 "iscsi_target_node_request_logout", 00:17:39.201 "iscsi_target_node_set_redirect", 00:17:39.201 "iscsi_target_node_set_auth", 00:17:39.201 "iscsi_target_node_add_lun", 00:17:39.201 "iscsi_get_stats", 00:17:39.201 "iscsi_get_connections", 00:17:39.201 "iscsi_portal_group_set_auth", 
00:17:39.201 "iscsi_start_portal_group", 00:17:39.201 "iscsi_delete_portal_group", 00:17:39.201 "iscsi_create_portal_group", 00:17:39.201 "iscsi_get_portal_groups", 00:17:39.201 "iscsi_delete_target_node", 00:17:39.201 "iscsi_target_node_remove_pg_ig_maps", 00:17:39.201 "iscsi_target_node_add_pg_ig_maps", 00:17:39.201 "iscsi_create_target_node", 00:17:39.201 "iscsi_get_target_nodes", 00:17:39.201 "iscsi_delete_initiator_group", 00:17:39.201 "iscsi_initiator_group_remove_initiators", 00:17:39.201 "iscsi_initiator_group_add_initiators", 00:17:39.201 "iscsi_create_initiator_group", 00:17:39.201 "iscsi_get_initiator_groups", 00:17:39.201 "nvmf_set_crdt", 00:17:39.201 "nvmf_set_config", 00:17:39.201 "nvmf_set_max_subsystems", 00:17:39.201 "nvmf_stop_mdns_prr", 00:17:39.201 "nvmf_publish_mdns_prr", 00:17:39.201 "nvmf_subsystem_get_listeners", 00:17:39.201 "nvmf_subsystem_get_qpairs", 00:17:39.201 "nvmf_subsystem_get_controllers", 00:17:39.201 "nvmf_get_stats", 00:17:39.201 "nvmf_get_transports", 00:17:39.201 "nvmf_create_transport", 00:17:39.201 "nvmf_get_targets", 00:17:39.201 "nvmf_delete_target", 00:17:39.201 "nvmf_create_target", 00:17:39.201 "nvmf_subsystem_allow_any_host", 00:17:39.201 "nvmf_subsystem_set_keys", 00:17:39.201 "nvmf_subsystem_remove_host", 00:17:39.201 "nvmf_subsystem_add_host", 00:17:39.201 "nvmf_ns_remove_host", 00:17:39.201 "nvmf_ns_add_host", 00:17:39.201 "nvmf_subsystem_remove_ns", 00:17:39.201 "nvmf_subsystem_set_ns_ana_group", 00:17:39.201 "nvmf_subsystem_add_ns", 00:17:39.201 "nvmf_subsystem_listener_set_ana_state", 00:17:39.201 "nvmf_discovery_get_referrals", 00:17:39.201 "nvmf_discovery_remove_referral", 00:17:39.201 "nvmf_discovery_add_referral", 00:17:39.201 "nvmf_subsystem_remove_listener", 00:17:39.201 "nvmf_subsystem_add_listener", 00:17:39.201 "nvmf_delete_subsystem", 00:17:39.201 "nvmf_create_subsystem", 00:17:39.201 "nvmf_get_subsystems", 00:17:39.201 "env_dpdk_get_mem_stats", 00:17:39.201 "nbd_get_disks", 00:17:39.201 
"nbd_stop_disk", 00:17:39.201 "nbd_start_disk", 00:17:39.201 "ublk_recover_disk", 00:17:39.201 "ublk_get_disks", 00:17:39.201 "ublk_stop_disk", 00:17:39.201 "ublk_start_disk", 00:17:39.201 "ublk_destroy_target", 00:17:39.201 "ublk_create_target", 00:17:39.201 "virtio_blk_create_transport", 00:17:39.201 "virtio_blk_get_transports", 00:17:39.201 "vhost_controller_set_coalescing", 00:17:39.201 "vhost_get_controllers", 00:17:39.201 "vhost_delete_controller", 00:17:39.201 "vhost_create_blk_controller", 00:17:39.201 "vhost_scsi_controller_remove_target", 00:17:39.201 "vhost_scsi_controller_add_target", 00:17:39.201 "vhost_start_scsi_controller", 00:17:39.201 "vhost_create_scsi_controller", 00:17:39.201 "thread_set_cpumask", 00:17:39.201 "scheduler_set_options", 00:17:39.201 "framework_get_governor", 00:17:39.201 "framework_get_scheduler", 00:17:39.201 "framework_set_scheduler", 00:17:39.201 "framework_get_reactors", 00:17:39.201 "thread_get_io_channels", 00:17:39.201 "thread_get_pollers", 00:17:39.201 "thread_get_stats", 00:17:39.201 "framework_monitor_context_switch", 00:17:39.201 "spdk_kill_instance", 00:17:39.201 "log_enable_timestamps", 00:17:39.201 "log_get_flags", 00:17:39.201 "log_clear_flag", 00:17:39.201 "log_set_flag", 00:17:39.201 "log_get_level", 00:17:39.201 "log_set_level", 00:17:39.201 "log_get_print_level", 00:17:39.201 "log_set_print_level", 00:17:39.201 "framework_enable_cpumask_locks", 00:17:39.201 "framework_disable_cpumask_locks", 00:17:39.201 "framework_wait_init", 00:17:39.201 "framework_start_init", 00:17:39.201 "scsi_get_devices", 00:17:39.201 "bdev_get_histogram", 00:17:39.201 "bdev_enable_histogram", 00:17:39.201 "bdev_set_qos_limit", 00:17:39.201 "bdev_set_qd_sampling_period", 00:17:39.201 "bdev_get_bdevs", 00:17:39.201 "bdev_reset_iostat", 00:17:39.201 "bdev_get_iostat", 00:17:39.201 "bdev_examine", 00:17:39.201 "bdev_wait_for_examine", 00:17:39.201 "bdev_set_options", 00:17:39.201 "accel_get_stats", 00:17:39.201 "accel_set_options", 
00:17:39.201 "accel_set_driver", 00:17:39.201 "accel_crypto_key_destroy", 00:17:39.201 "accel_crypto_keys_get", 00:17:39.201 "accel_crypto_key_create", 00:17:39.201 "accel_assign_opc", 00:17:39.201 "accel_get_module_info", 00:17:39.201 "accel_get_opc_assignments", 00:17:39.201 "vmd_rescan", 00:17:39.201 "vmd_remove_device", 00:17:39.201 "vmd_enable", 00:17:39.201 "sock_get_default_impl", 00:17:39.201 "sock_set_default_impl", 00:17:39.201 "sock_impl_set_options", 00:17:39.201 "sock_impl_get_options", 00:17:39.201 "iobuf_get_stats", 00:17:39.201 "iobuf_set_options", 00:17:39.201 "keyring_get_keys", 00:17:39.201 "framework_get_pci_devices", 00:17:39.201 "framework_get_config", 00:17:39.201 "framework_get_subsystems", 00:17:39.201 "fsdev_set_opts", 00:17:39.201 "fsdev_get_opts", 00:17:39.201 "trace_get_info", 00:17:39.201 "trace_get_tpoint_group_mask", 00:17:39.201 "trace_disable_tpoint_group", 00:17:39.201 "trace_enable_tpoint_group", 00:17:39.201 "trace_clear_tpoint_mask", 00:17:39.201 "trace_set_tpoint_mask", 00:17:39.201 "notify_get_notifications", 00:17:39.201 "notify_get_types", 00:17:39.201 "spdk_get_version", 00:17:39.201 "rpc_get_methods" 00:17:39.201 ] 00:17:39.201 22:59:14 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:17:39.201 22:59:14 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:39.201 22:59:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:39.201 22:59:14 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:39.201 22:59:14 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 56935 00:17:39.201 22:59:14 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 56935 ']' 00:17:39.201 22:59:14 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 56935 00:17:39.201 22:59:14 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:17:39.201 22:59:14 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:39.201 22:59:14 spdkcli_tcp -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56935 00:17:39.201 22:59:14 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:39.201 killing process with pid 56935 00:17:39.201 22:59:14 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:39.201 22:59:14 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56935' 00:17:39.201 22:59:14 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 56935 00:17:39.201 22:59:14 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 56935 00:17:40.575 ************************************ 00:17:40.575 END TEST spdkcli_tcp 00:17:40.575 ************************************ 00:17:40.575 00:17:40.575 real 0m2.538s 00:17:40.575 user 0m4.552s 00:17:40.575 sys 0m0.453s 00:17:40.575 22:59:15 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:40.575 22:59:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:40.575 22:59:15 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:17:40.575 22:59:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:40.575 22:59:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:40.575 22:59:15 -- common/autotest_common.sh@10 -- # set +x 00:17:40.575 ************************************ 00:17:40.575 START TEST dpdk_mem_utility 00:17:40.575 ************************************ 00:17:40.575 22:59:15 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:17:40.575 * Looking for test storage... 
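The spdkcli_tcp run that just finished exercises SPDK's JSON-RPC interface over TCP by bridging the target's UNIX domain socket (/var/tmp/spdk.sock) to TCP port 9998 with `socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock`, then calling `rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods` through that bridge. The relay pattern itself can be sketched without SPDK; everything below (the stand-in UNIX-socket server, its fixed two-method response, and an ephemeral port in place of 9998) is an illustrative mock, not SPDK's implementation.

```python
import json
import os
import socket
import tempfile
import threading


def run_demo():
    unix_path = os.path.join(tempfile.mkdtemp(), "spdk.sock")

    # Stand-in for spdk_tgt: a JSON-RPC-style listener on a UNIX domain
    # socket that answers one request with a fixed method list (mock data).
    usrv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    usrv.bind(unix_path)
    usrv.listen(1)

    def unix_server():
        conn, _ = usrv.accept()
        req = json.loads(conn.recv(4096).decode())
        resp = {"jsonrpc": "2.0", "id": req["id"],
                "result": ["rpc_get_methods", "spdk_get_version"]}
        conn.sendall(json.dumps(resp).encode())
        conn.close()

    # Stand-in for `socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock`:
    # accept one TCP connection and relay a single request/response pair
    # to the UNIX socket. Port 0 asks the OS for an ephemeral port.
    tsrv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tsrv.bind(("127.0.0.1", 0))
    tsrv.listen(1)
    port = tsrv.getsockname()[1]

    def bridge():
        tconn, _ = tsrv.accept()
        ux = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        ux.connect(unix_path)
        ux.sendall(tconn.recv(4096))      # forward request TCP -> UNIX
        tconn.sendall(ux.recv(4096))      # forward response UNIX -> TCP
        ux.close()
        tconn.close()

    threads = [threading.Thread(target=unix_server),
               threading.Thread(target=bridge)]
    for t in threads:
        t.start()

    # Stand-in for `rpc.py -s 127.0.0.1 -p 9998 rpc_get_methods`:
    # a plain TCP client sending one JSON-RPC request through the bridge.
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", port))
    cli.sendall(json.dumps({"jsonrpc": "2.0", "id": 1,
                            "method": "rpc_get_methods"}).encode())
    result = json.loads(cli.recv(4096).decode())["result"]
    cli.close()
    for t in threads:
        t.join()
    usrv.close()
    tsrv.close()
    return result
```

The real test tears the bridge down by killing the socat pid (56952 here) alongside the spdk_tgt pid; the sketch simply lets each side handle one exchange and exit.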
00:17:40.575 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:17:40.575 22:59:15 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:40.575 22:59:15 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:17:40.575 22:59:15 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:40.575 22:59:15 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:40.575 22:59:15 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:40.575 22:59:15 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:40.575 22:59:15 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:40.575 22:59:15 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:17:40.575 22:59:15 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:17:40.576 22:59:15 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:17:40.576 22:59:15 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:17:40.576 22:59:15 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:17:40.576 22:59:15 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:17:40.576 22:59:15 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:17:40.576 22:59:15 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:40.576 22:59:15 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:17:40.576 22:59:15 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:17:40.576 22:59:15 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:40.576 22:59:15 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:40.576 22:59:15 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:17:40.576 22:59:15 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:17:40.576 22:59:15 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:40.576 22:59:15 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:17:40.576 22:59:15 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:17:40.576 22:59:15 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:17:40.576 22:59:15 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:17:40.576 22:59:15 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:40.576 22:59:15 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:17:40.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.576 22:59:15 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:17:40.576 22:59:15 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:40.576 22:59:15 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:40.576 22:59:15 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:17:40.576 22:59:15 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:40.576 22:59:15 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:40.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.576 --rc genhtml_branch_coverage=1 00:17:40.576 --rc genhtml_function_coverage=1 00:17:40.576 --rc genhtml_legend=1 00:17:40.576 --rc geninfo_all_blocks=1 00:17:40.576 --rc geninfo_unexecuted_blocks=1 00:17:40.576 00:17:40.576 ' 00:17:40.576 22:59:15 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:40.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.576 --rc genhtml_branch_coverage=1 00:17:40.576 --rc genhtml_function_coverage=1 
00:17:40.576 --rc genhtml_legend=1 00:17:40.576 --rc geninfo_all_blocks=1 00:17:40.576 --rc geninfo_unexecuted_blocks=1 00:17:40.576 00:17:40.576 ' 00:17:40.576 22:59:15 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:40.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.576 --rc genhtml_branch_coverage=1 00:17:40.576 --rc genhtml_function_coverage=1 00:17:40.576 --rc genhtml_legend=1 00:17:40.576 --rc geninfo_all_blocks=1 00:17:40.576 --rc geninfo_unexecuted_blocks=1 00:17:40.576 00:17:40.576 ' 00:17:40.576 22:59:15 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:40.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.576 --rc genhtml_branch_coverage=1 00:17:40.576 --rc genhtml_function_coverage=1 00:17:40.576 --rc genhtml_legend=1 00:17:40.576 --rc geninfo_all_blocks=1 00:17:40.576 --rc geninfo_unexecuted_blocks=1 00:17:40.576 00:17:40.576 ' 00:17:40.576 22:59:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:17:40.576 22:59:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57041 00:17:40.576 22:59:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:40.576 22:59:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57041 00:17:40.576 22:59:15 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57041 ']' 00:17:40.576 22:59:15 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.576 22:59:15 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:40.576 22:59:15 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:40.576 22:59:15 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:40.576 22:59:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:17:40.834 [2024-12-09 22:59:15.958187] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:17:40.834 [2024-12-09 22:59:15.958286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57041 ] 00:17:40.834 [2024-12-09 22:59:16.111443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.091 [2024-12-09 22:59:16.198068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.423 22:59:16 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:41.423 22:59:16 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:17:41.423 22:59:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:17:41.423 22:59:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:17:41.423 22:59:16 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.423 22:59:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:17:41.682 { 00:17:41.682 "filename": "/tmp/spdk_mem_dump.txt" 00:17:41.682 } 00:17:41.682 22:59:16 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.682 22:59:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:17:41.682 DPDK memory size 824.000000 MiB in 1 heap(s) 00:17:41.682 1 heaps totaling size 824.000000 MiB 00:17:41.682 size: 824.000000 MiB heap id: 0 00:17:41.682 end heaps---------- 00:17:41.682 9 mempools totaling size 603.782043 MiB 00:17:41.682 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:17:41.682 size: 158.602051 MiB name: PDU_data_out_Pool 00:17:41.682 size: 100.555481 MiB name: bdev_io_57041 00:17:41.682 size: 50.003479 MiB name: msgpool_57041 00:17:41.682 size: 36.509338 MiB name: fsdev_io_57041 00:17:41.682 size: 21.763794 MiB name: PDU_Pool 00:17:41.682 size: 19.513306 MiB name: SCSI_TASK_Pool 00:17:41.682 size: 4.133484 MiB name: evtpool_57041 00:17:41.682 size: 0.026123 MiB name: Session_Pool 00:17:41.682 end mempools------- 00:17:41.682 6 memzones totaling size 4.142822 MiB 00:17:41.682 size: 1.000366 MiB name: RG_ring_0_57041 00:17:41.682 size: 1.000366 MiB name: RG_ring_1_57041 00:17:41.682 size: 1.000366 MiB name: RG_ring_4_57041 00:17:41.682 size: 1.000366 MiB name: RG_ring_5_57041 00:17:41.682 size: 0.125366 MiB name: RG_ring_2_57041 00:17:41.682 size: 0.015991 MiB name: RG_ring_3_57041 00:17:41.682 end memzones------- 00:17:41.682 22:59:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:17:41.682 heap id: 0 total size: 824.000000 MiB number of busy elements: 328 number of free elements: 18 00:17:41.682 list of free elements. 
size: 16.778198 MiB 00:17:41.682 element at address: 0x200006400000 with size: 1.995972 MiB 00:17:41.682 element at address: 0x20000a600000 with size: 1.995972 MiB 00:17:41.682 element at address: 0x200003e00000 with size: 1.991028 MiB 00:17:41.682 element at address: 0x200019500040 with size: 0.999939 MiB 00:17:41.682 element at address: 0x200019900040 with size: 0.999939 MiB 00:17:41.682 element at address: 0x200019a00000 with size: 0.999084 MiB 00:17:41.682 element at address: 0x200032600000 with size: 0.994324 MiB 00:17:41.682 element at address: 0x200000400000 with size: 0.992004 MiB 00:17:41.682 element at address: 0x200019200000 with size: 0.959656 MiB 00:17:41.682 element at address: 0x200019d00040 with size: 0.936401 MiB 00:17:41.682 element at address: 0x200000200000 with size: 0.716980 MiB 00:17:41.682 element at address: 0x20001b400000 with size: 0.558777 MiB 00:17:41.682 element at address: 0x200000c00000 with size: 0.489197 MiB 00:17:41.682 element at address: 0x200019600000 with size: 0.487976 MiB 00:17:41.682 element at address: 0x200019e00000 with size: 0.485413 MiB 00:17:41.682 element at address: 0x200012c00000 with size: 0.433228 MiB 00:17:41.682 element at address: 0x200028800000 with size: 0.391418 MiB 00:17:41.682 element at address: 0x200000800000 with size: 0.350891 MiB 00:17:41.682 list of standard malloc elements. 
size: 199.290894 MiB
00:17:41.682 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:17:41.682 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:17:41.682 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:17:41.682 element at address: 0x2000197fff80 with size: 1.000183 MiB
00:17:41.682 element at address: 0x200019bfff80 with size: 1.000183 MiB
00:17:41.682 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:17:41.682 element at address: 0x200019deff40 with size: 0.062683 MiB
00:17:41.682 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:17:41.682 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:17:41.682 element at address: 0x200019defdc0 with size: 0.000366 MiB
00:17:41.682 element at address: 0x200012bff040 with size: 0.000305 MiB
00:17:41.682 element at address: 0x2000002d7b00 with size: 0.000244 MiB
00:17:41.682 element at address: 0x2000003d9d80 with size: 0.000244 MiB
[00:17:41.682-00:17:41.685: several hundred further "element at address: … with size: 0.000244 MiB" entries, covering the address ranges 0x2000004fd…-0x2000004ff…, 0x20000087e…-0x2000008ff…, 0x200000c7d…-0x200000cff…, 0x20000a5ff…, 0x200012bff…-0x200012cef…, 0x2000192…-0x200019e…, 0x20001b48f…-0x20001b495…, and 0x200028864…-0x20002886f…]
00:17:41.685 list of memzone associated elements.
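The per-element dump above is line-oriented and machine-parseable ("element at address: … with size: N MiB"), so the reported sizes can be totalled mechanically. A minimal sketch under that assumption (the helper name is ours, not part of SPDK or DPDK; it keys on the "size:" token as emitted in the lines above):

```shell
#!/usr/bin/env bash
# Sum the sizes reported by "element at address: ... with size: N MiB"
# dump lines, as printed in the log above. Works whether or not the
# lines carry timestamp prefixes, since it scans for the "size:" token.
total_element_mib() {
    awk '/element at address:/ {
            for (i = 1; i <= NF; i++)
                if ($i == "size:") total += $(i + 1)
         }
         END { printf "%.6f\n", total }'
}

printf '%s\n' \
    "element at address: 0x20000a7fef80 with size: 132.000183 MiB" \
    "element at address: 0x2000065fef80 with size: 64.000183 MiB" |
    total_element_mib
```

This mirrors how one would sanity-check the heap-size header against the element list when reading a dpdk_mem_utility dump by hand.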
size: 607.930908 MiB
00:17:41.685 element at address: 0x20001b4954c0 with size: 211.416809 MiB
00:17:41.685 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:17:41.685 element at address: 0x20002886ff80 with size: 157.562622 MiB
00:17:41.685 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:17:41.685 element at address: 0x200012df1e40 with size: 100.055115 MiB
00:17:41.685 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57041_0
00:17:41.685 element at address: 0x200000dff340 with size: 48.003113 MiB
00:17:41.685 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57041_0
00:17:41.685 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:17:41.685 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57041_0
00:17:41.685 element at address: 0x200019fbe900 with size: 20.255615 MiB
00:17:41.685 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:17:41.685 element at address: 0x2000327feb00 with size: 18.005127 MiB
00:17:41.685 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:17:41.685 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:17:41.685 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57041_0
00:17:41.685 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:17:41.685 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57041
00:17:41.685 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:17:41.685 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57041
00:17:41.685 element at address: 0x2000196fde00 with size: 1.008179 MiB
00:17:41.685 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:17:41.685 element at address: 0x200019ebc780 with size: 1.008179 MiB
00:17:41.685 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:17:41.685 element at address: 0x2000192fde00 with size: 1.008179 MiB
00:17:41.685 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:17:41.685 element at address: 0x200012cefcc0 with size: 1.008179 MiB
00:17:41.685 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:17:41.685 element at address: 0x200000cff100 with size: 1.000549 MiB
00:17:41.685 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57041
00:17:41.685 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:17:41.685 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57041
00:17:41.685 element at address: 0x200019affd40 with size: 1.000549 MiB
00:17:41.685 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57041
00:17:41.685 element at address: 0x2000326fe8c0 with size: 1.000549 MiB
00:17:41.685 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57041
00:17:41.685 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:17:41.685 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57041
00:17:41.685 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:17:41.685 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57041
00:17:41.685 element at address: 0x20001967dac0 with size: 0.500549 MiB
00:17:41.685 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:17:41.685 element at address: 0x200012c6f980 with size: 0.500549 MiB
00:17:41.685 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:17:41.685 element at address: 0x200019e7c440 with size: 0.250549 MiB
00:17:41.685 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:17:41.685 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:17:41.685 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57041
00:17:41.685 element at address: 0x20000085df80 with size: 0.125549 MiB
00:17:41.685 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57041
00:17:41.685 element at address: 0x2000192f5ac0 with size: 0.031799 MiB
00:17:41.685 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:17:41.685 element at address: 0x200028864540 with size: 0.023804 MiB
00:17:41.685 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:17:41.685 element at address: 0x200000859d40 with size: 0.016174 MiB
00:17:41.685 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57041
00:17:41.685 element at address: 0x20002886a6c0 with size: 0.002502 MiB
00:17:41.685 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:17:41.685 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:17:41.685 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57041
00:17:41.685 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:17:41.685 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57041
00:17:41.685 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:17:41.685 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57041
00:17:41.685 element at address: 0x20002886b200 with size: 0.000366 MiB
00:17:41.685 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:17:41.685 22:59:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:17:41.685 22:59:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57041
00:17:41.685 22:59:16 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57041 ']'
00:17:41.685 22:59:16 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57041
00:17:41.685 22:59:16 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:17:41.685 22:59:16 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:41.685 22:59:16 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57041
killing process with pid 57041
22:59:16 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:41.685 22:59:16 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:41.685 22:59:16 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57041'
00:17:41.685 22:59:16 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57041
00:17:41.685 22:59:16 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57041
00:17:43.058 ************************************
00:17:43.058 END TEST dpdk_mem_utility
00:17:43.058 ************************************
00:17:43.058
00:17:43.058 real 0m2.387s
00:17:43.058 user 0m2.393s
00:17:43.058 sys 0m0.385s
00:17:43.058 22:59:18 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:43.058 22:59:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:17:43.059 22:59:18 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:17:43.059 22:59:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:43.059 22:59:18 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:43.059 22:59:18 -- common/autotest_common.sh@10 -- # set +x
00:17:43.059 ************************************
00:17:43.059 START TEST event
00:17:43.059 ************************************
00:17:43.059 22:59:18 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:17:43.059 * Looking for test storage...
00:17:43.059 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:17:43.059 22:59:18 event -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:17:43.059 22:59:18 event -- common/autotest_common.sh@1711 -- # lcov --version
00:17:43.059 22:59:18 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:17:43.059 22:59:18 event -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:17:43.059 22:59:18 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:43.059 22:59:18 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:43.059 22:59:18 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:43.059 22:59:18 event -- scripts/common.sh@336 -- # IFS=.-:
00:17:43.059 22:59:18 event -- scripts/common.sh@336 -- # read -ra ver1
00:17:43.059 22:59:18 event -- scripts/common.sh@337 -- # IFS=.-:
00:17:43.059 22:59:18 event -- scripts/common.sh@337 -- # read -ra ver2
00:17:43.059 22:59:18 event -- scripts/common.sh@338 -- # local 'op=<'
00:17:43.059 22:59:18 event -- scripts/common.sh@340 -- # ver1_l=2
00:17:43.059 22:59:18 event -- scripts/common.sh@341 -- # ver2_l=1
00:17:43.059 22:59:18 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:43.059 22:59:18 event -- scripts/common.sh@344 -- # case "$op" in
00:17:43.059 22:59:18 event -- scripts/common.sh@345 -- # : 1
00:17:43.059 22:59:18 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:43.059 22:59:18 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:43.059 22:59:18 event -- scripts/common.sh@365 -- # decimal 1
00:17:43.059 22:59:18 event -- scripts/common.sh@353 -- # local d=1
00:17:43.059 22:59:18 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:43.059 22:59:18 event -- scripts/common.sh@355 -- # echo 1
00:17:43.059 22:59:18 event -- scripts/common.sh@365 -- # ver1[v]=1
00:17:43.059 22:59:18 event -- scripts/common.sh@366 -- # decimal 2
00:17:43.059 22:59:18 event -- scripts/common.sh@353 -- # local d=2
00:17:43.059 22:59:18 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:43.059 22:59:18 event -- scripts/common.sh@355 -- # echo 2
00:17:43.059 22:59:18 event -- scripts/common.sh@366 -- # ver2[v]=2
00:17:43.059 22:59:18 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:43.059 22:59:18 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:43.059 22:59:18 event -- scripts/common.sh@368 -- # return 0
00:17:43.059 22:59:18 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:43.059 22:59:18 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:17:43.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:43.059 --rc genhtml_branch_coverage=1
00:17:43.059 --rc genhtml_function_coverage=1
00:17:43.059 --rc genhtml_legend=1
00:17:43.059 --rc geninfo_all_blocks=1
00:17:43.059 --rc geninfo_unexecuted_blocks=1
00:17:43.059
00:17:43.059 '
00:17:43.059 22:59:18 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:17:43.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:43.059 --rc genhtml_branch_coverage=1
00:17:43.059 --rc genhtml_function_coverage=1
00:17:43.059 --rc genhtml_legend=1
00:17:43.059 --rc geninfo_all_blocks=1
00:17:43.059 --rc geninfo_unexecuted_blocks=1
00:17:43.059
00:17:43.059 '
00:17:43.059 22:59:18 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:17:43.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:43.059 --rc genhtml_branch_coverage=1
00:17:43.059 --rc genhtml_function_coverage=1
00:17:43.059 --rc genhtml_legend=1
00:17:43.059 --rc geninfo_all_blocks=1
00:17:43.059 --rc geninfo_unexecuted_blocks=1
00:17:43.059
00:17:43.059 '
00:17:43.059 22:59:18 event -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:17:43.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:43.059 --rc genhtml_branch_coverage=1
00:17:43.059 --rc genhtml_function_coverage=1
00:17:43.059 --rc genhtml_legend=1
00:17:43.059 --rc geninfo_all_blocks=1
00:17:43.059 --rc geninfo_unexecuted_blocks=1
00:17:43.059
00:17:43.059 '
00:17:43.059 22:59:18 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:17:43.059 22:59:18 event -- bdev/nbd_common.sh@6 -- # set -e
00:17:43.059 22:59:18 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:17:43.059 22:59:18 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:17:43.059 22:59:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:43.059 22:59:18 event -- common/autotest_common.sh@10 -- # set +x
00:17:43.059 ************************************
00:17:43.059 START TEST event_perf
00:17:43.059 ************************************
00:17:43.059 22:59:18 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:17:43.059 Running I/O for 1 seconds...[2024-12-09 22:59:18.356036] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization...
00:17:43.059 [2024-12-09 22:59:18.356228] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57132 ] 00:17:43.317 [2024-12-09 22:59:18.511573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:43.317 [2024-12-09 22:59:18.618644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.317 Running I/O for 1 seconds...[2024-12-09 22:59:18.619186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.317 [2024-12-09 22:59:18.619432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.317 [2024-12-09 22:59:18.619455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:44.689 00:17:44.689 lcore 0: 200236 00:17:44.689 lcore 1: 200232 00:17:44.689 lcore 2: 200234 00:17:44.689 lcore 3: 200235 00:17:44.689 done. 
00:17:44.689 00:17:44.689 real 0m1.459s 00:17:44.689 user 0m4.250s 00:17:44.689 sys 0m0.084s 00:17:44.689 22:59:19 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:44.689 22:59:19 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:17:44.689 ************************************ 00:17:44.689 END TEST event_perf 00:17:44.689 ************************************ 00:17:44.689 22:59:19 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:17:44.689 22:59:19 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:44.689 22:59:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:44.689 22:59:19 event -- common/autotest_common.sh@10 -- # set +x 00:17:44.689 ************************************ 00:17:44.689 START TEST event_reactor 00:17:44.689 ************************************ 00:17:44.689 22:59:19 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:17:44.689 [2024-12-09 22:59:19.865823] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:17:44.689 [2024-12-09 22:59:19.866135] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57172 ] 00:17:44.689 [2024-12-09 22:59:20.028570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.946 [2024-12-09 22:59:20.131077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.319 test_start 00:17:46.319 oneshot 00:17:46.319 tick 100 00:17:46.319 tick 100 00:17:46.319 tick 250 00:17:46.319 tick 100 00:17:46.319 tick 100 00:17:46.319 tick 250 00:17:46.319 tick 100 00:17:46.319 tick 500 00:17:46.319 tick 100 00:17:46.319 tick 100 00:17:46.319 tick 250 00:17:46.319 tick 100 00:17:46.319 tick 100 00:17:46.319 test_end 00:17:46.319 00:17:46.319 real 0m1.449s 00:17:46.319 user 0m1.275s 00:17:46.319 sys 0m0.066s 00:17:46.319 22:59:21 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:46.319 22:59:21 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:17:46.319 ************************************ 00:17:46.319 END TEST event_reactor 00:17:46.319 ************************************ 00:17:46.319 22:59:21 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:17:46.319 22:59:21 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:46.319 22:59:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:46.319 22:59:21 event -- common/autotest_common.sh@10 -- # set +x 00:17:46.319 ************************************ 00:17:46.319 START TEST event_reactor_perf 00:17:46.319 ************************************ 00:17:46.319 22:59:21 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:17:46.319 [2024-12-09 
22:59:21.366600] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:17:46.320 [2024-12-09 22:59:21.366767] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57203 ] 00:17:46.320 [2024-12-09 22:59:21.544231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.320 [2024-12-09 22:59:21.646736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.694 test_start 00:17:47.694 test_end 00:17:47.694 Performance: 310124 events per second 00:17:47.694 ************************************ 00:17:47.694 END TEST event_reactor_perf 00:17:47.694 ************************************ 00:17:47.694 00:17:47.694 real 0m1.478s 00:17:47.694 user 0m1.287s 00:17:47.694 sys 0m0.082s 00:17:47.694 22:59:22 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:47.694 22:59:22 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:17:47.694 22:59:22 event -- event/event.sh@49 -- # uname -s 00:17:47.694 22:59:22 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:17:47.694 22:59:22 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:17:47.694 22:59:22 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:47.694 22:59:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:47.694 22:59:22 event -- common/autotest_common.sh@10 -- # set +x 00:17:47.694 ************************************ 00:17:47.694 START TEST event_scheduler 00:17:47.694 ************************************ 00:17:47.694 22:59:22 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:17:47.694 * Looking for test storage... 
00:17:47.694 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:17:47.694 22:59:22 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:47.694 22:59:22 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:17:47.694 22:59:22 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:47.694 22:59:22 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:47.694 22:59:22 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:47.694 22:59:22 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:47.694 22:59:22 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:47.694 22:59:22 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:17:47.694 22:59:22 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:17:47.694 22:59:22 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:17:47.694 22:59:22 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:17:47.694 22:59:22 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:17:47.694 22:59:22 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:17:47.694 22:59:22 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:17:47.694 22:59:22 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:47.694 22:59:22 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:17:47.694 22:59:22 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:17:47.694 22:59:22 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:47.694 22:59:22 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:47.694 22:59:22 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:17:47.694 22:59:22 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:17:47.694 22:59:22 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:47.694 22:59:22 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:17:47.694 22:59:22 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:17:47.694 22:59:22 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:17:47.694 22:59:22 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:17:47.694 22:59:22 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:47.694 22:59:22 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:17:47.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.694 22:59:22 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:17:47.694 22:59:22 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:47.694 22:59:22 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:47.694 22:59:22 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:17:47.694 22:59:22 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:47.694 22:59:22 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:47.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.694 --rc genhtml_branch_coverage=1 00:17:47.694 --rc genhtml_function_coverage=1 00:17:47.694 --rc genhtml_legend=1 00:17:47.694 --rc geninfo_all_blocks=1 00:17:47.694 --rc geninfo_unexecuted_blocks=1 00:17:47.694 00:17:47.694 ' 00:17:47.694 22:59:22 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:47.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.694 
--rc genhtml_branch_coverage=1 00:17:47.694 --rc genhtml_function_coverage=1 00:17:47.694 --rc genhtml_legend=1 00:17:47.694 --rc geninfo_all_blocks=1 00:17:47.694 --rc geninfo_unexecuted_blocks=1 00:17:47.694 00:17:47.694 ' 00:17:47.694 22:59:22 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:47.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.694 --rc genhtml_branch_coverage=1 00:17:47.694 --rc genhtml_function_coverage=1 00:17:47.694 --rc genhtml_legend=1 00:17:47.694 --rc geninfo_all_blocks=1 00:17:47.694 --rc geninfo_unexecuted_blocks=1 00:17:47.694 00:17:47.694 ' 00:17:47.694 22:59:22 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:47.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.694 --rc genhtml_branch_coverage=1 00:17:47.694 --rc genhtml_function_coverage=1 00:17:47.694 --rc genhtml_legend=1 00:17:47.694 --rc geninfo_all_blocks=1 00:17:47.694 --rc geninfo_unexecuted_blocks=1 00:17:47.694 00:17:47.694 ' 00:17:47.694 22:59:22 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:17:47.694 22:59:22 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=57279 00:17:47.694 22:59:22 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:17:47.694 22:59:22 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 57279 00:17:47.694 22:59:22 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 57279 ']' 00:17:47.695 22:59:22 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.695 22:59:22 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:47.695 22:59:22 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:47.695 22:59:22 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:47.695 22:59:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:17:47.695 22:59:22 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:17:47.695 [2024-12-09 22:59:23.053453] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:17:47.695 [2024-12-09 22:59:23.053581] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57279 ] 00:17:47.952 [2024-12-09 22:59:23.209467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:48.210 [2024-12-09 22:59:23.317997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.210 [2024-12-09 22:59:23.318213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.210 [2024-12-09 22:59:23.318433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:48.210 [2024-12-09 22:59:23.318620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:48.799 22:59:23 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:48.799 22:59:23 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:17:48.799 22:59:23 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:17:48.799 22:59:23 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.799 22:59:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:17:48.799 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:17:48.799 POWER: Cannot set governor of lcore 0 to userspace 00:17:48.799 POWER: failed 
to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:17:48.799 POWER: Cannot set governor of lcore 0 to performance 00:17:48.799 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:17:48.799 POWER: Cannot set governor of lcore 0 to userspace 00:17:48.799 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:17:48.799 POWER: Cannot set governor of lcore 0 to userspace 00:17:48.799 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:17:48.799 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:17:48.799 POWER: Unable to set Power Management Environment for lcore 0 00:17:48.799 [2024-12-09 22:59:23.936001] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:17:48.799 [2024-12-09 22:59:23.936022] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:17:48.799 [2024-12-09 22:59:23.936032] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:17:48.799 [2024-12-09 22:59:23.936049] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:17:48.799 [2024-12-09 22:59:23.936057] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:17:48.799 [2024-12-09 22:59:23.936066] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:17:48.800 22:59:23 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.800 22:59:23 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:17:48.800 22:59:23 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.800 22:59:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:17:49.058 [2024-12-09 22:59:24.165291] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:17:49.058 22:59:24 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.058 22:59:24 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:17:49.058 22:59:24 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:49.058 22:59:24 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:49.058 22:59:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:17:49.058 ************************************ 00:17:49.058 START TEST scheduler_create_thread 00:17:49.058 ************************************ 00:17:49.058 22:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:17:49.058 22:59:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:17:49.058 22:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.058 22:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:49.058 2 00:17:49.058 22:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.058 22:59:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:17:49.058 22:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.058 22:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:49.058 3 00:17:49.058 22:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.058 22:59:24 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:17:49.058 22:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.058 22:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:49.058 4 00:17:49.058 22:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.058 22:59:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:17:49.059 22:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.059 22:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:49.059 5 00:17:49.059 22:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.059 22:59:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:17:49.059 22:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.059 22:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:49.059 6 00:17:49.059 22:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.059 22:59:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:17:49.059 22:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.059 22:59:24 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:17:49.059 7 00:17:49.059 22:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.059 22:59:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:17:49.059 22:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.059 22:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:49.059 8 00:17:49.059 22:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.059 22:59:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:17:49.059 22:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.059 22:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:49.059 9 00:17:49.059 22:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.059 22:59:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:17:49.059 22:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.059 22:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:49.059 10 00:17:49.059 22:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.059 22:59:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:17:49.059 22:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.059 22:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:49.059 22:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.059 22:59:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:17:49.059 22:59:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:17:49.059 22:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.059 22:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:49.059 22:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.059 22:59:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:17:49.059 22:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.059 22:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:50.036 22:59:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.036 22:59:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:17:50.036 22:59:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:17:50.036 22:59:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.036 22:59:25 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:50.969 22:59:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.969 ************************************ 00:17:50.969 END TEST scheduler_create_thread 00:17:50.969 ************************************ 00:17:50.969 00:17:50.969 real 0m2.136s 00:17:50.969 user 0m0.013s 00:17:50.969 sys 0m0.006s 00:17:50.969 22:59:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:50.969 22:59:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:51.226 22:59:26 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:17:51.226 22:59:26 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 57279 00:17:51.226 22:59:26 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 57279 ']' 00:17:51.226 22:59:26 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 57279 00:17:51.226 22:59:26 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:17:51.226 22:59:26 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:51.226 22:59:26 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57279 00:17:51.226 killing process with pid 57279 00:17:51.226 22:59:26 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:51.226 22:59:26 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:51.226 22:59:26 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57279' 00:17:51.226 22:59:26 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 57279 00:17:51.226 22:59:26 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 57279 00:17:51.484 [2024-12-09 22:59:26.792459] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:17:52.421 00:17:52.421 real 0m4.673s 00:17:52.421 user 0m8.099s 00:17:52.421 sys 0m0.345s 00:17:52.421 22:59:27 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:52.421 ************************************ 00:17:52.421 END TEST event_scheduler 00:17:52.421 ************************************ 00:17:52.421 22:59:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:17:52.421 22:59:27 event -- event/event.sh@51 -- # modprobe -n nbd 00:17:52.421 22:59:27 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:17:52.421 22:59:27 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:52.421 22:59:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:52.421 22:59:27 event -- common/autotest_common.sh@10 -- # set +x 00:17:52.421 ************************************ 00:17:52.421 START TEST app_repeat 00:17:52.421 ************************************ 00:17:52.421 22:59:27 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:17:52.421 22:59:27 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:52.421 22:59:27 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:52.421 22:59:27 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:17:52.421 22:59:27 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:52.421 22:59:27 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:17:52.421 22:59:27 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:17:52.421 22:59:27 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:17:52.421 Process app_repeat pid: 57379 00:17:52.421 22:59:27 event.app_repeat -- event/event.sh@19 -- # repeat_pid=57379 00:17:52.421 22:59:27 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:17:52.421 
22:59:27 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57379' 00:17:52.421 22:59:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:17:52.421 spdk_app_start Round 0 00:17:52.421 22:59:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:17:52.422 22:59:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57379 /var/tmp/spdk-nbd.sock 00:17:52.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:52.422 22:59:27 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:17:52.422 22:59:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 57379 ']' 00:17:52.422 22:59:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:52.422 22:59:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:52.422 22:59:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:52.422 22:59:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:52.422 22:59:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:17:52.422 [2024-12-09 22:59:27.614686] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:17:52.422 [2024-12-09 22:59:27.614971] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57379 ] 00:17:52.422 [2024-12-09 22:59:27.773905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:52.680 [2024-12-09 22:59:27.875214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.680 [2024-12-09 22:59:27.875215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.247 22:59:28 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:53.247 22:59:28 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:17:53.247 22:59:28 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:17:53.504 Malloc0 00:17:53.504 22:59:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:17:53.761 Malloc1 00:17:53.761 22:59:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:17:53.761 22:59:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:53.761 22:59:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:53.761 22:59:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:53.761 22:59:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:53.761 22:59:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:53.761 22:59:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:17:53.761 22:59:28 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:53.761 22:59:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:53.761 22:59:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:53.761 22:59:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:53.761 22:59:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:53.761 22:59:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:17:53.761 22:59:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:53.761 22:59:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:53.761 22:59:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:17:53.761 /dev/nbd0 00:17:54.020 22:59:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:54.020 22:59:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:54.020 22:59:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:54.020 22:59:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:17:54.020 22:59:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:54.020 22:59:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:54.020 22:59:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:54.020 22:59:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:17:54.020 22:59:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:54.020 22:59:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:54.020 22:59:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:17:54.020 1+0 records in 00:17:54.020 1+0 
records out 00:17:54.020 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327545 s, 12.5 MB/s 00:17:54.020 22:59:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:54.020 22:59:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:17:54.020 22:59:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:54.020 22:59:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:54.020 22:59:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:17:54.020 22:59:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:54.020 22:59:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:54.020 22:59:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:17:54.020 /dev/nbd1 00:17:54.020 22:59:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:54.020 22:59:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:54.020 22:59:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:54.020 22:59:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:17:54.020 22:59:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:54.020 22:59:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:54.020 22:59:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:54.020 22:59:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:17:54.020 22:59:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:54.020 22:59:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:54.020 22:59:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:17:54.020 1+0 records in 00:17:54.020 1+0 records out 00:17:54.020 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243357 s, 16.8 MB/s 00:17:54.020 22:59:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:54.278 22:59:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:17:54.278 22:59:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:54.278 22:59:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:54.278 22:59:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:17:54.278 22:59:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:54.278 22:59:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:54.278 22:59:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:54.278 22:59:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:54.278 22:59:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:54.278 22:59:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:54.278 { 00:17:54.278 "nbd_device": "/dev/nbd0", 00:17:54.278 "bdev_name": "Malloc0" 00:17:54.278 }, 00:17:54.278 { 00:17:54.278 "nbd_device": "/dev/nbd1", 00:17:54.278 "bdev_name": "Malloc1" 00:17:54.278 } 00:17:54.278 ]' 00:17:54.278 22:59:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:54.278 { 00:17:54.278 "nbd_device": "/dev/nbd0", 00:17:54.278 "bdev_name": "Malloc0" 00:17:54.278 }, 00:17:54.278 { 00:17:54.278 "nbd_device": "/dev/nbd1", 00:17:54.278 "bdev_name": "Malloc1" 00:17:54.278 } 00:17:54.278 ]' 00:17:54.278 22:59:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:17:54.278 22:59:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:17:54.278 /dev/nbd1' 00:17:54.278 22:59:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:54.278 22:59:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:17:54.278 /dev/nbd1' 00:17:54.278 22:59:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:17:54.278 22:59:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:17:54.278 22:59:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:17:54.278 22:59:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:17:54.278 22:59:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:17:54.278 22:59:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:54.278 22:59:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:54.278 22:59:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:54.278 22:59:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:54.278 22:59:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:54.278 22:59:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:17:54.278 256+0 records in 00:17:54.278 256+0 records out 00:17:54.278 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.006345 s, 165 MB/s 00:17:54.278 22:59:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:54.278 22:59:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:54.536 256+0 records in 00:17:54.536 256+0 records out 00:17:54.536 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0210278 s, 49.9 MB/s 00:17:54.536 22:59:29 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:54.536 22:59:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:17:54.536 256+0 records in 00:17:54.536 256+0 records out 00:17:54.536 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0189676 s, 55.3 MB/s 00:17:54.536 22:59:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:17:54.536 22:59:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:54.536 22:59:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:54.536 22:59:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:54.536 22:59:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:54.536 22:59:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:54.536 22:59:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:54.536 22:59:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:54.536 22:59:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:17:54.536 22:59:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:54.536 22:59:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:17:54.536 22:59:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:54.536 22:59:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:17:54.536 22:59:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:54.536 22:59:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:54.536 22:59:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:54.536 22:59:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:17:54.536 22:59:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:54.537 22:59:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:54.537 22:59:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:54.537 22:59:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:54.537 22:59:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:54.537 22:59:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:54.537 22:59:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:54.537 22:59:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:54.794 22:59:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:17:54.794 22:59:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:17:54.794 22:59:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:54.794 22:59:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:54.794 22:59:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:54.794 22:59:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:54.794 22:59:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:54.794 22:59:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:54.794 22:59:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:54.794 22:59:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:54.794 22:59:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:17:54.794 22:59:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:17:54.794 22:59:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:54.794 22:59:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:54.794 22:59:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:55.052 22:59:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:55.052 22:59:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:55.052 22:59:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:55.052 22:59:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:55.052 22:59:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:17:55.052 22:59:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:55.052 22:59:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:17:55.052 22:59:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:17:55.052 22:59:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:17:55.052 22:59:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:17:55.052 22:59:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:55.052 22:59:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:17:55.052 22:59:30 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:17:55.617 22:59:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:17:56.182 [2024-12-09 22:59:31.429048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:56.182 [2024-12-09 22:59:31.509915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.182 [2024-12-09 22:59:31.509928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.439 
[2024-12-09 22:59:31.612406] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:17:56.439 [2024-12-09 22:59:31.612461] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:17:58.965 spdk_app_start Round 1 00:17:58.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:58.965 22:59:33 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:17:58.965 22:59:33 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:17:58.965 22:59:33 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57379 /var/tmp/spdk-nbd.sock 00:17:58.965 22:59:33 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 57379 ']' 00:17:58.965 22:59:33 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:58.965 22:59:33 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.965 22:59:33 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:17:58.965 22:59:33 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.965 22:59:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:17:58.965 22:59:33 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:58.965 22:59:33 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:17:58.965 22:59:33 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:17:58.965 Malloc0 00:17:58.965 22:59:34 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:17:59.223 Malloc1 00:17:59.223 22:59:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:17:59.223 22:59:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:59.223 22:59:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:59.223 22:59:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:59.223 22:59:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:59.223 22:59:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:59.223 22:59:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:17:59.223 22:59:34 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:59.223 22:59:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:59.223 22:59:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:59.223 22:59:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:59.223 22:59:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:59.223 22:59:34 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:17:59.223 22:59:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:59.223 22:59:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:59.223 22:59:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:17:59.481 /dev/nbd0 00:17:59.481 22:59:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:59.481 22:59:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:59.481 22:59:34 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:59.481 22:59:34 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:17:59.481 22:59:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:59.481 22:59:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:59.481 22:59:34 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:59.481 22:59:34 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:17:59.481 22:59:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:59.481 22:59:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:59.481 22:59:34 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:17:59.481 1+0 records in 00:17:59.481 1+0 records out 00:17:59.481 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235918 s, 17.4 MB/s 00:17:59.481 22:59:34 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:59.481 22:59:34 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:17:59.481 22:59:34 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:59.481 
22:59:34 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:59.481 22:59:34 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:17:59.481 22:59:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:59.481 22:59:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:59.481 22:59:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:17:59.740 /dev/nbd1 00:17:59.740 22:59:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:59.740 22:59:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:59.740 22:59:34 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:59.740 22:59:34 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:17:59.740 22:59:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:59.740 22:59:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:59.740 22:59:34 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:59.740 22:59:34 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:17:59.740 22:59:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:59.740 22:59:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:59.740 22:59:34 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:17:59.740 1+0 records in 00:17:59.740 1+0 records out 00:17:59.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028831 s, 14.2 MB/s 00:17:59.740 22:59:34 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:59.740 22:59:34 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:17:59.740 22:59:34 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:59.740 22:59:34 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:59.740 22:59:34 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:17:59.740 22:59:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:59.740 22:59:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:59.740 22:59:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:59.740 22:59:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:59.740 22:59:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:59.998 { 00:17:59.998 "nbd_device": "/dev/nbd0", 00:17:59.998 "bdev_name": "Malloc0" 00:17:59.998 }, 00:17:59.998 { 00:17:59.998 "nbd_device": "/dev/nbd1", 00:17:59.998 "bdev_name": "Malloc1" 00:17:59.998 } 00:17:59.998 ]' 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:59.998 { 00:17:59.998 "nbd_device": "/dev/nbd0", 00:17:59.998 "bdev_name": "Malloc0" 00:17:59.998 }, 00:17:59.998 { 00:17:59.998 "nbd_device": "/dev/nbd1", 00:17:59.998 "bdev_name": "Malloc1" 00:17:59.998 } 00:17:59.998 ]' 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:17:59.998 /dev/nbd1' 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:17:59.998 /dev/nbd1' 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:17:59.998 
22:59:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:17:59.998 256+0 records in 00:17:59.998 256+0 records out 00:17:59.998 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00732179 s, 143 MB/s 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:59.998 256+0 records in 00:17:59.998 256+0 records out 00:17:59.998 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133985 s, 78.3 MB/s 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:17:59.998 256+0 records in 00:17:59.998 256+0 records out 00:17:59.998 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162201 s, 64.6 MB/s 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:59.998 22:59:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:00.255 22:59:35 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:00.256 22:59:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:00.256 22:59:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:00.256 22:59:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:00.256 22:59:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:00.256 22:59:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:00.256 22:59:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:18:00.256 22:59:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:18:00.256 22:59:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:00.256 22:59:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:00.513 22:59:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:00.513 22:59:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:00.513 22:59:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:00.513 22:59:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:00.513 22:59:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:00.513 22:59:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:00.513 22:59:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:18:00.513 22:59:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:18:00.513 22:59:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:00.513 22:59:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:00.513 22:59:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:00.513 22:59:35 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:00.513 22:59:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:00.513 22:59:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:00.772 22:59:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:00.772 22:59:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:18:00.772 22:59:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:00.772 22:59:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:18:00.772 22:59:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:18:00.772 22:59:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:18:00.772 22:59:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:18:00.772 22:59:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:00.772 22:59:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:18:00.772 22:59:35 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:18:01.029 22:59:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:18:01.594 [2024-12-09 22:59:36.798594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:01.594 [2024-12-09 22:59:36.881292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.594 [2024-12-09 22:59:36.881308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.853 [2024-12-09 22:59:36.987028] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:18:01.853 [2024-12-09 22:59:36.987091] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:18:04.403 spdk_app_start Round 2 00:18:04.403 22:59:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:18:04.403 22:59:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:18:04.403 22:59:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57379 /var/tmp/spdk-nbd.sock 00:18:04.403 22:59:39 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 57379 ']' 00:18:04.403 22:59:39 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:04.403 22:59:39 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:04.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:04.403 22:59:39 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:04.403 22:59:39 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:04.403 22:59:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:18:04.403 22:59:39 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:04.403 22:59:39 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:18:04.403 22:59:39 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:18:04.403 Malloc0 00:18:04.403 22:59:39 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:18:04.661 Malloc1 00:18:04.661 22:59:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:18:04.661 22:59:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:04.661 22:59:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:04.662 
22:59:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:04.662 22:59:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:04.662 22:59:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:04.662 22:59:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:18:04.662 22:59:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:04.662 22:59:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:04.662 22:59:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:04.662 22:59:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:04.662 22:59:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:04.662 22:59:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:18:04.662 22:59:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:04.662 22:59:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:04.662 22:59:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:18:04.920 /dev/nbd0 00:18:04.920 22:59:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:04.920 22:59:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:04.920 22:59:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:04.920 22:59:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:18:04.920 22:59:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:04.920 22:59:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:04.920 22:59:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:04.920 22:59:40 
event.app_repeat -- common/autotest_common.sh@877 -- # break 00:18:04.920 22:59:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:04.920 22:59:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:04.920 22:59:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:18:04.920 1+0 records in 00:18:04.920 1+0 records out 00:18:04.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264434 s, 15.5 MB/s 00:18:04.920 22:59:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:04.920 22:59:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:18:04.920 22:59:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:04.920 22:59:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:04.920 22:59:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:18:04.920 22:59:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:04.920 22:59:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:04.920 22:59:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:18:05.178 /dev/nbd1 00:18:05.178 22:59:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:05.178 22:59:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:05.178 22:59:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:05.178 22:59:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:18:05.178 22:59:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:05.178 22:59:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:05.178 22:59:40 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:05.178 22:59:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:18:05.178 22:59:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:05.178 22:59:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:05.178 22:59:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:18:05.178 1+0 records in 00:18:05.178 1+0 records out 00:18:05.178 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283763 s, 14.4 MB/s 00:18:05.178 22:59:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:05.178 22:59:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:18:05.178 22:59:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:05.178 22:59:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:05.178 22:59:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:18:05.178 22:59:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:05.178 22:59:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:05.178 22:59:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:05.178 22:59:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:05.178 22:59:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:05.435 { 00:18:05.435 "nbd_device": "/dev/nbd0", 00:18:05.435 "bdev_name": "Malloc0" 00:18:05.435 }, 00:18:05.435 { 00:18:05.435 "nbd_device": "/dev/nbd1", 00:18:05.435 "bdev_name": 
"Malloc1" 00:18:05.435 } 00:18:05.435 ]' 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:05.435 { 00:18:05.435 "nbd_device": "/dev/nbd0", 00:18:05.435 "bdev_name": "Malloc0" 00:18:05.435 }, 00:18:05.435 { 00:18:05.435 "nbd_device": "/dev/nbd1", 00:18:05.435 "bdev_name": "Malloc1" 00:18:05.435 } 00:18:05.435 ]' 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:18:05.435 /dev/nbd1' 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:18:05.435 /dev/nbd1' 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:18:05.435 256+0 records in 00:18:05.435 256+0 records out 00:18:05.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00725688 s, 144 MB/s 
00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:05.435 256+0 records in 00:18:05.435 256+0 records out 00:18:05.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132433 s, 79.2 MB/s 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:18:05.435 256+0 records in 00:18:05.435 256+0 records out 00:18:05.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173011 s, 60.6 MB/s 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:05.435 22:59:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:05.692 22:59:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:05.692 22:59:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:05.692 22:59:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:05.692 22:59:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:05.692 22:59:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:05.692 22:59:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:05.692 22:59:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:18:05.692 22:59:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:18:05.692 22:59:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:05.692 22:59:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:05.950 22:59:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:05.950 22:59:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:18:05.950 22:59:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:05.950 22:59:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:05.950 22:59:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:05.950 22:59:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:05.950 22:59:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:18:05.950 22:59:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:18:05.950 22:59:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:05.950 22:59:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:05.950 22:59:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:06.207 22:59:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:06.207 22:59:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:06.207 22:59:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:06.207 22:59:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:06.207 22:59:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:18:06.207 22:59:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:06.207 22:59:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:18:06.207 22:59:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:18:06.207 22:59:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:18:06.207 22:59:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:18:06.207 22:59:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:06.207 22:59:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:18:06.207 22:59:41 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:18:06.465 22:59:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:18:07.030 [2024-12-09 22:59:42.288589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:07.030 [2024-12-09 22:59:42.367355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.030 [2024-12-09 22:59:42.367362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.288 [2024-12-09 22:59:42.467237] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:18:07.288 [2024-12-09 22:59:42.467302] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:18:09.816 22:59:44 event.app_repeat -- event/event.sh@38 -- # waitforlisten 57379 /var/tmp/spdk-nbd.sock 00:18:09.816 22:59:44 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 57379 ']' 00:18:09.816 22:59:44 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:09.816 22:59:44 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:09.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:09.816 22:59:44 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
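The `waitforlisten` call above (and the `waitfornbd` loops earlier, which grep `/proc/partitions` up to 20 times) follow one retry pattern: poll a condition a bounded number of times, then give up. A generic sketch of that loop, under the assumption that the condition is an arbitrary command — the real helpers probe the RPC socket or the partition table:

```shell
#!/usr/bin/env bash
# Generic sketch of the bounded retry loop behind waitforlisten/waitfornbd:
# run the given condition command until it succeeds or max_retries is spent.
wait_for() {
    local max_retries=$1; shift
    local i
    for ((i = 1; i <= max_retries; i++)); do
        if "$@"; then
            return 0        # condition became true
        fi
        sleep 0.1           # back off briefly before the next probe
    done
    return 1                # never became true: let the caller fail the test
}

tmp=$(mktemp)
wait_for 20 test -e "$tmp" && echo "ready"
rm -f "$tmp"
```

In the trace, the equivalent of `return 1` is what eventually surfaces as "is no longer running" when the polled pid never appears.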
00:18:09.816 22:59:44 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:09.816 22:59:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:18:09.816 22:59:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.816 22:59:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:18:09.816 22:59:44 event.app_repeat -- event/event.sh@39 -- # killprocess 57379 00:18:09.816 22:59:44 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 57379 ']' 00:18:09.816 22:59:44 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 57379 00:18:09.816 22:59:44 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:18:09.816 22:59:44 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:09.816 22:59:44 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57379 00:18:09.816 killing process with pid 57379 00:18:09.816 22:59:44 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:09.816 22:59:44 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:09.816 22:59:44 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57379' 00:18:09.816 22:59:44 event.app_repeat -- common/autotest_common.sh@973 -- # kill 57379 00:18:09.816 22:59:44 event.app_repeat -- common/autotest_common.sh@978 -- # wait 57379 00:18:10.381 spdk_app_start is called in Round 0. 00:18:10.381 Shutdown signal received, stop current app iteration 00:18:10.381 Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 reinitialization... 00:18:10.381 spdk_app_start is called in Round 1. 00:18:10.381 Shutdown signal received, stop current app iteration 00:18:10.381 Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 reinitialization... 00:18:10.381 spdk_app_start is called in Round 2. 
00:18:10.381 Shutdown signal received, stop current app iteration 00:18:10.381 Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 reinitialization... 00:18:10.381 spdk_app_start is called in Round 3. 00:18:10.381 Shutdown signal received, stop current app iteration 00:18:10.381 ************************************ 00:18:10.381 END TEST app_repeat 00:18:10.381 ************************************ 00:18:10.381 22:59:45 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:18:10.381 22:59:45 event.app_repeat -- event/event.sh@42 -- # return 0 00:18:10.381 00:18:10.381 real 0m17.926s 00:18:10.381 user 0m39.366s 00:18:10.381 sys 0m2.129s 00:18:10.381 22:59:45 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:10.381 22:59:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:18:10.381 22:59:45 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:18:10.381 22:59:45 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:18:10.381 22:59:45 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:10.381 22:59:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:10.381 22:59:45 event -- common/autotest_common.sh@10 -- # set +x 00:18:10.381 ************************************ 00:18:10.381 START TEST cpu_locks 00:18:10.381 ************************************ 00:18:10.381 22:59:45 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:18:10.381 * Looking for test storage... 
00:18:10.382 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:18:10.382 22:59:45 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:10.382 22:59:45 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:18:10.382 22:59:45 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:10.382 22:59:45 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:10.382 22:59:45 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:10.382 22:59:45 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:10.382 22:59:45 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:10.382 22:59:45 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:18:10.382 22:59:45 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:18:10.382 22:59:45 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:18:10.382 22:59:45 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:18:10.382 22:59:45 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:18:10.382 22:59:45 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:18:10.382 22:59:45 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:18:10.382 22:59:45 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:10.382 22:59:45 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:18:10.382 22:59:45 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:18:10.382 22:59:45 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:10.382 22:59:45 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:10.382 22:59:45 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:18:10.382 22:59:45 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:18:10.382 22:59:45 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:10.382 22:59:45 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:18:10.382 22:59:45 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:18:10.382 22:59:45 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:18:10.382 22:59:45 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:18:10.382 22:59:45 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:10.382 22:59:45 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:18:10.382 22:59:45 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:18:10.382 22:59:45 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:10.382 22:59:45 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:10.382 22:59:45 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:18:10.382 22:59:45 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:10.382 22:59:45 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:10.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.382 --rc genhtml_branch_coverage=1 00:18:10.382 --rc genhtml_function_coverage=1 00:18:10.382 --rc genhtml_legend=1 00:18:10.382 --rc geninfo_all_blocks=1 00:18:10.382 --rc geninfo_unexecuted_blocks=1 00:18:10.382 00:18:10.382 ' 00:18:10.382 22:59:45 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:10.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.382 --rc genhtml_branch_coverage=1 00:18:10.382 --rc genhtml_function_coverage=1 00:18:10.382 --rc genhtml_legend=1 00:18:10.382 --rc geninfo_all_blocks=1 00:18:10.382 --rc geninfo_unexecuted_blocks=1 
00:18:10.382 00:18:10.382 ' 00:18:10.382 22:59:45 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:10.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.382 --rc genhtml_branch_coverage=1 00:18:10.382 --rc genhtml_function_coverage=1 00:18:10.382 --rc genhtml_legend=1 00:18:10.382 --rc geninfo_all_blocks=1 00:18:10.382 --rc geninfo_unexecuted_blocks=1 00:18:10.382 00:18:10.382 ' 00:18:10.382 22:59:45 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:10.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.382 --rc genhtml_branch_coverage=1 00:18:10.382 --rc genhtml_function_coverage=1 00:18:10.382 --rc genhtml_legend=1 00:18:10.382 --rc geninfo_all_blocks=1 00:18:10.382 --rc geninfo_unexecuted_blocks=1 00:18:10.382 00:18:10.382 ' 00:18:10.382 22:59:45 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:18:10.382 22:59:45 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:18:10.382 22:59:45 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:18:10.382 22:59:45 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:18:10.382 22:59:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:10.382 22:59:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:10.382 22:59:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:10.382 ************************************ 00:18:10.382 START TEST default_locks 00:18:10.382 ************************************ 00:18:10.382 22:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:18:10.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
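The `lcov --version` check traced above (`lt 1.15 2` via `cmp_versions` in `scripts/common.sh`) splits both version strings on `.`, `-`, and `:` with `IFS` and compares the fields numerically, padding the shorter version with zeros. A condensed sketch of that comparison:

```shell
#!/usr/bin/env bash
# Condensed sketch of the cmp_versions logic traced above: split each version
# on . - : into an array, then compare field by field as integers.
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < len; v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1  # first is newer
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # first is older
    done
    return 1    # equal versions: not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

This is why the harness selects the plain `LCOV_OPTS` branch here: lcov 1.x predates 2, so the 1.x-compatible flags are exported.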
00:18:10.382 22:59:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57810 00:18:10.382 22:59:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 57810 00:18:10.382 22:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 57810 ']' 00:18:10.382 22:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.382 22:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:10.382 22:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.382 22:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:10.382 22:59:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:18:10.382 22:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:18:10.640 [2024-12-09 22:59:45.783348] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:18:10.640 [2024-12-09 22:59:45.783473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57810 ] 00:18:10.640 [2024-12-09 22:59:45.940971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.898 [2024-12-09 22:59:46.041523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.463 22:59:46 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:11.463 22:59:46 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:18:11.463 22:59:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 57810 00:18:11.463 22:59:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 57810 00:18:11.463 22:59:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:18:11.463 22:59:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 57810 00:18:11.463 22:59:46 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 57810 ']' 00:18:11.463 22:59:46 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 57810 00:18:11.463 22:59:46 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:18:11.463 22:59:46 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:11.463 22:59:46 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57810 00:18:11.721 killing process with pid 57810 00:18:11.721 22:59:46 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:11.721 22:59:46 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:11.721 22:59:46 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 57810' 00:18:11.721 22:59:46 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 57810 00:18:11.721 22:59:46 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 57810 00:18:13.095 22:59:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57810 00:18:13.095 22:59:48 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:18:13.095 22:59:48 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 57810 00:18:13.095 22:59:48 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:18:13.095 22:59:48 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.096 22:59:48 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:18:13.096 22:59:48 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.096 22:59:48 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 57810 00:18:13.096 22:59:48 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 57810 ']' 00:18:13.096 22:59:48 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.096 22:59:48 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.096 22:59:48 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:13.096 ERROR: process (pid: 57810) is no longer running 00:18:13.096 22:59:48 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.096 22:59:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:18:13.096 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (57810) - No such process 00:18:13.096 22:59:48 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:13.096 22:59:48 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:18:13.096 22:59:48 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:18:13.096 22:59:48 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:13.096 22:59:48 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:13.096 22:59:48 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:13.096 22:59:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:18:13.096 22:59:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:18:13.096 22:59:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:18:13.096 22:59:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:18:13.096 00:18:13.096 real 0m2.641s 00:18:13.096 user 0m2.631s 00:18:13.096 sys 0m0.443s 00:18:13.096 22:59:48 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:13.096 22:59:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:18:13.096 ************************************ 00:18:13.096 END TEST default_locks 00:18:13.096 ************************************ 00:18:13.096 22:59:48 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:18:13.096 22:59:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:18:13.096 22:59:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:13.096 22:59:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:13.096 ************************************ 00:18:13.096 START TEST default_locks_via_rpc 00:18:13.096 ************************************ 00:18:13.096 22:59:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:18:13.096 22:59:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=57874 00:18:13.096 22:59:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 57874 00:18:13.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.096 22:59:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 57874 ']' 00:18:13.096 22:59:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.096 22:59:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.096 22:59:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.096 22:59:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.096 22:59:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:18:13.096 22:59:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.096 [2024-12-09 22:59:48.452254] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
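The `NOT waitforlisten 57810` sequence in the previous test shows the harness's expected-failure wrapper: run a command that *should* fail, capture its exit status in `es`, and succeed only if it did fail and did not die from a signal. A hedged sketch of that wrapper (the signal screen mirrors the `(( es > 128 ))` check in the trace; the rest of the real helper's argument validation is omitted):

```shell
#!/usr/bin/env bash
# Sketch of the NOT/es pattern traced above: invert a command's exit status,
# but still treat signal deaths (status > 128) as genuine failures.
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return 1   # killed by a signal: a real failure
    (( es == 0 )) && return 1    # unexpectedly succeeded
    return 0                     # failed as expected
}

NOT false && echo "false failed, as expected"
```

In the log, `waitforlisten 57810` exits non-zero because pid 57810 was already killed, so `NOT` returns 0 and the `default_locks` test proceeds to its `no_locks` check.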
00:18:13.096 [2024-12-09 22:59:48.452502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57874 ] 00:18:13.354 [2024-12-09 22:59:48.603293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.354 [2024-12-09 22:59:48.704725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.285 22:59:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.285 22:59:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:14.285 22:59:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:18:14.285 22:59:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.285 22:59:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:14.285 22:59:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.285 22:59:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:18:14.285 22:59:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:18:14.285 22:59:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:18:14.285 22:59:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:18:14.285 22:59:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:18:14.285 22:59:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.285 22:59:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:14.285 22:59:49 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.285 22:59:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 57874 00:18:14.285 22:59:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 57874 00:18:14.285 22:59:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:18:14.285 22:59:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 57874 00:18:14.285 22:59:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 57874 ']' 00:18:14.285 22:59:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 57874 00:18:14.285 22:59:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:18:14.285 22:59:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.285 22:59:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57874 00:18:14.285 killing process with pid 57874 00:18:14.285 22:59:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:14.285 22:59:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:14.285 22:59:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57874' 00:18:14.285 22:59:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 57874 00:18:14.285 22:59:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 57874 00:18:15.718 ************************************ 00:18:15.718 END TEST default_locks_via_rpc 00:18:15.718 ************************************ 00:18:15.718 00:18:15.718 real 0m2.671s 00:18:15.718 user 0m2.680s 00:18:15.718 sys 0m0.432s 00:18:15.718 
22:59:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:15.718 22:59:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:15.975 22:59:51 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:18:15.975 22:59:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:15.975 22:59:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:15.975 22:59:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:15.975 ************************************ 00:18:15.975 START TEST non_locking_app_on_locked_coremask 00:18:15.975 ************************************ 00:18:15.975 22:59:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:18:15.975 22:59:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=57926 00:18:15.975 22:59:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 57926 /var/tmp/spdk.sock 00:18:15.975 22:59:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 57926 ']' 00:18:15.975 22:59:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.975 22:59:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.975 22:59:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:15.975 22:59:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.975 22:59:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:18:15.975 22:59:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:15.975 [2024-12-09 22:59:51.174693] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:18:15.975 [2024-12-09 22:59:51.174813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57926 ] 00:18:15.975 [2024-12-09 22:59:51.331490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.232 [2024-12-09 22:59:51.433692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:18:16.796 22:59:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.796 22:59:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:18:16.796 22:59:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=57942 00:18:16.796 22:59:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 57942 /var/tmp/spdk2.sock 00:18:16.796 22:59:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 57942 ']' 00:18:16.796 22:59:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:16.796 22:59:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:18:16.796 22:59:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:16.796 22:59:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:18:16.796 22:59:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:16.796 22:59:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:16.796 [2024-12-09 22:59:52.086695] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:18:16.796 [2024-12-09 22:59:52.086955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57942 ] 00:18:17.054 [2024-12-09 22:59:52.256085] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:18:17.054 [2024-12-09 22:59:52.256159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.311 [2024-12-09 22:59:52.456273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.275 22:59:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:18.275 22:59:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:18:18.275 22:59:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 57926 00:18:18.276 22:59:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 57926 00:18:18.276 22:59:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:18:18.534 22:59:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 57926 00:18:18.534 22:59:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 57926 ']' 00:18:18.534 22:59:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 57926 00:18:18.534 22:59:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:18:18.534 22:59:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:18.534 22:59:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
57926 00:18:18.534 killing process with pid 57926 00:18:18.534 22:59:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:18.534 22:59:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:18.534 22:59:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57926' 00:18:18.534 22:59:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 57926 00:18:18.534 22:59:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 57926 00:18:21.061 22:59:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 57942 00:18:21.061 22:59:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 57942 ']' 00:18:21.061 22:59:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 57942 00:18:21.061 22:59:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:18:21.061 22:59:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:21.061 22:59:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57942 00:18:21.061 killing process with pid 57942 00:18:21.061 22:59:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:21.061 22:59:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:21.061 22:59:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57942' 00:18:21.061 22:59:56 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 57942 00:18:21.061 22:59:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 57942 00:18:22.434 00:18:22.434 real 0m6.410s 00:18:22.434 user 0m6.599s 00:18:22.434 sys 0m0.840s 00:18:22.434 22:59:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:22.434 ************************************ 00:18:22.434 END TEST non_locking_app_on_locked_coremask 00:18:22.434 ************************************ 00:18:22.434 22:59:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:22.434 22:59:57 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:18:22.434 22:59:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:22.434 22:59:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:22.434 22:59:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:22.434 ************************************ 00:18:22.434 START TEST locking_app_on_unlocked_coremask 00:18:22.434 ************************************ 00:18:22.434 22:59:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:18:22.434 22:59:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58044 00:18:22.434 22:59:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58044 /var/tmp/spdk.sock 00:18:22.434 22:59:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58044 ']' 00:18:22.434 22:59:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.434 22:59:57 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:22.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.434 22:59:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.434 22:59:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:22.434 22:59:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:22.434 22:59:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:18:22.434 [2024-12-09 22:59:57.623210] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:18:22.434 [2024-12-09 22:59:57.623339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58044 ] 00:18:22.434 [2024-12-09 22:59:57.779958] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:18:22.434 [2024-12-09 22:59:57.780012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.692 [2024-12-09 22:59:57.865723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:18:23.257 22:59:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:23.258 22:59:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:18:23.258 22:59:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:18:23.258 22:59:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58049 00:18:23.258 22:59:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58049 /var/tmp/spdk2.sock 00:18:23.258 22:59:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58049 ']' 00:18:23.258 22:59:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:23.258 22:59:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:23.258 22:59:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:18:23.258 22:59:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:23.258 22:59:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:23.258 [2024-12-09 22:59:58.480942] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:18:23.258 [2024-12-09 22:59:58.481300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58049 ] 00:18:23.515 [2024-12-09 22:59:58.642976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.515 [2024-12-09 22:59:58.816691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.450 22:59:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:24.450 22:59:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:18:24.450 22:59:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58049 00:18:24.450 22:59:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58049 00:18:24.450 22:59:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:18:24.709 23:00:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58044 00:18:24.709 23:00:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58044 ']' 00:18:24.709 23:00:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58044 00:18:24.709 23:00:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:18:24.709 23:00:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.709 23:00:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58044 00:18:24.709 killing process with pid 58044 00:18:24.709 23:00:00 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:24.709 23:00:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:24.709 23:00:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58044' 00:18:24.709 23:00:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58044 00:18:24.709 23:00:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58044 00:18:27.236 23:00:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58049 00:18:27.236 23:00:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58049 ']' 00:18:27.236 23:00:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58049 00:18:27.236 23:00:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:18:27.236 23:00:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:27.236 23:00:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58049 00:18:27.236 killing process with pid 58049 00:18:27.236 23:00:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:27.236 23:00:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:27.236 23:00:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58049' 00:18:27.236 23:00:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58049 00:18:27.236 23:00:02 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@978 -- # wait 58049 00:18:28.603 00:18:28.603 real 0m6.222s 00:18:28.603 user 0m6.421s 00:18:28.603 sys 0m0.838s 00:18:28.603 23:00:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:28.603 23:00:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:28.603 ************************************ 00:18:28.603 END TEST locking_app_on_unlocked_coremask 00:18:28.603 ************************************ 00:18:28.603 23:00:03 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:18:28.603 23:00:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:28.603 23:00:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:28.603 23:00:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:28.603 ************************************ 00:18:28.603 START TEST locking_app_on_locked_coremask 00:18:28.603 ************************************ 00:18:28.603 23:00:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:18:28.603 23:00:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58151 00:18:28.604 23:00:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58151 /var/tmp/spdk.sock 00:18:28.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:28.604 23:00:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58151 ']' 00:18:28.604 23:00:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.604 23:00:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.604 23:00:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.604 23:00:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.604 23:00:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:18:28.604 23:00:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:28.604 [2024-12-09 23:00:03.878280] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:18:28.604 [2024-12-09 23:00:03.878403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58151 ] 00:18:28.873 [2024-12-09 23:00:04.038768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.873 [2024-12-09 23:00:04.137785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.475 23:00:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.475 23:00:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:18:29.475 23:00:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58166 00:18:29.475 23:00:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58166 /var/tmp/spdk2.sock 00:18:29.475 23:00:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:18:29.475 23:00:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:18:29.475 23:00:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58166 /var/tmp/spdk2.sock 00:18:29.475 23:00:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:18:29.475 23:00:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:29.475 23:00:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:18:29.475 23:00:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:18:29.475 23:00:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58166 /var/tmp/spdk2.sock 00:18:29.475 23:00:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58166 ']' 00:18:29.475 23:00:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:29.475 23:00:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:29.475 23:00:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:18:29.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:18:29.475 23:00:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:29.475 23:00:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:29.475 [2024-12-09 23:00:04.798071] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:18:29.475 [2024-12-09 23:00:04.798335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58166 ] 00:18:29.732 [2024-12-09 23:00:04.975537] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58151 has claimed it. 00:18:29.732 [2024-12-09 23:00:04.975606] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:18:30.306 ERROR: process (pid: 58166) is no longer running 00:18:30.306 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58166) - No such process 00:18:30.306 23:00:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:30.306 23:00:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:18:30.306 23:00:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:18:30.306 23:00:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:30.306 23:00:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:30.306 23:00:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:30.306 23:00:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58151 00:18:30.306 23:00:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:18:30.306 23:00:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58151 00:18:30.564 23:00:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58151 00:18:30.564 23:00:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58151 ']' 00:18:30.564 23:00:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58151 00:18:30.564 23:00:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:18:30.564 23:00:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:30.564 23:00:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58151 00:18:30.564 
killing process with pid 58151 00:18:30.564 23:00:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:30.564 23:00:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:30.564 23:00:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58151' 00:18:30.564 23:00:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58151 00:18:30.564 23:00:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58151 00:18:31.932 00:18:31.932 real 0m3.264s 00:18:31.932 user 0m3.487s 00:18:31.932 sys 0m0.549s 00:18:31.932 23:00:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:31.932 ************************************ 00:18:31.932 END TEST locking_app_on_locked_coremask 00:18:31.932 23:00:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:31.932 ************************************ 00:18:31.932 23:00:07 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:18:31.932 23:00:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:31.932 23:00:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:31.932 23:00:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:31.932 ************************************ 00:18:31.932 START TEST locking_overlapped_coremask 00:18:31.932 ************************************ 00:18:31.932 23:00:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:18:31.932 23:00:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58220 00:18:31.932 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.932 23:00:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58220 /var/tmp/spdk.sock 00:18:31.932 23:00:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:18:31.932 23:00:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58220 ']' 00:18:31.932 23:00:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.932 23:00:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.932 23:00:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.932 23:00:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.932 23:00:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:31.932 [2024-12-09 23:00:07.183737] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:18:31.932 [2024-12-09 23:00:07.183858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58220 ] 00:18:32.189 [2024-12-09 23:00:07.338972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:32.189 [2024-12-09 23:00:07.425081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.189 [2024-12-09 23:00:07.425145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.189 [2024-12-09 23:00:07.425198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:32.753 23:00:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.753 23:00:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:18:32.753 23:00:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:18:32.753 23:00:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58238 00:18:32.753 23:00:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58238 /var/tmp/spdk2.sock 00:18:32.753 23:00:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:18:32.753 23:00:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58238 /var/tmp/spdk2.sock 00:18:32.753 23:00:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:18:32.754 23:00:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:32.754 23:00:08 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:18:32.754 23:00:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:32.754 23:00:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58238 /var/tmp/spdk2.sock 00:18:32.754 23:00:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58238 ']' 00:18:32.754 23:00:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:32.754 23:00:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:32.754 23:00:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:18:32.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:18:32.754 23:00:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:32.754 23:00:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:32.754 [2024-12-09 23:00:08.084086] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:18:32.754 [2024-12-09 23:00:08.084378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58238 ] 00:18:33.011 [2024-12-09 23:00:08.255907] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58220 has claimed it. 00:18:33.011 [2024-12-09 23:00:08.255967] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
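The claim failure above, and the /var/tmp/spdk_cpu_lock_000 through _002 files that check_remaining_locks verifies just below, reflect per-core advisory file locks. As a hedged sketch of that mechanism only: the directory, file naming, and helper name here are illustrative choices for the example, not SPDK internals.

```python
import fcntl
import os
import tempfile

# Illustrative stand-in for the /var/tmp/spdk_cpu_lock_NNN files seen in
# the log; a temp directory is used so the sketch touches nothing real.
lockdir = tempfile.mkdtemp()

def claim_core(core: int) -> int:
    """Take an exclusive non-blocking flock on the core's lock file.

    Returns the open fd; the lock is held until that fd is closed.
    Raises RuntimeError when another open file description already holds
    the lock, mirroring the "Cannot create lock on core 2" error above.
    """
    path = os.path.join(lockdir, f"cpu_lock_{core:03d}")
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        os.close(fd)
        raise RuntimeError(f"Cannot create lock on core {core}, already claimed")
    return fd

first = claim_core(2)   # first claimant wins
try:
    claim_core(2)       # a second open file description is refused
except RuntimeError as e:
    print(e)            # -> Cannot create lock on core 2, already claimed
```

Because flock() locks belong to the open file description, closing the fd (or the process exiting, as spdk_tgt pid 58238 does here) releases the core automatically.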
00:18:33.579 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58238) - No such process 00:18:33.579 ERROR: process (pid: 58238) is no longer running 00:18:33.579 23:00:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:33.579 23:00:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:18:33.579 23:00:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:18:33.579 23:00:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:33.579 23:00:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:33.579 23:00:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:33.579 23:00:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:18:33.579 23:00:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:18:33.579 23:00:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:18:33.579 23:00:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:18:33.579 23:00:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58220 00:18:33.579 23:00:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 58220 ']' 00:18:33.579 23:00:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 58220 00:18:33.579 23:00:08 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:18:33.579 23:00:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.579 23:00:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58220 00:18:33.579 killing process with pid 58220 00:18:33.579 23:00:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:33.579 23:00:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:33.579 23:00:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58220' 00:18:33.579 23:00:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 58220 00:18:33.579 23:00:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 58220 00:18:34.947 ************************************ 00:18:34.947 END TEST locking_overlapped_coremask 00:18:34.947 ************************************ 00:18:34.947 00:18:34.947 real 0m2.912s 00:18:34.947 user 0m7.965s 00:18:34.947 sys 0m0.443s 00:18:34.947 23:00:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:34.947 23:00:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:34.947 23:00:10 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:18:34.947 23:00:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:34.947 23:00:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:34.947 23:00:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:34.947 ************************************ 00:18:34.947 START TEST 
locking_overlapped_coremask_via_rpc 00:18:34.947 ************************************ 00:18:34.947 23:00:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:18:34.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.947 23:00:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58291 00:18:34.947 23:00:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 58291 /var/tmp/spdk.sock 00:18:34.947 23:00:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58291 ']' 00:18:34.947 23:00:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.947 23:00:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:34.947 23:00:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.947 23:00:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:34.947 23:00:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:34.947 23:00:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:18:34.947 [2024-12-09 23:00:10.132356] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:18:34.947 [2024-12-09 23:00:10.132663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58291 ] 00:18:34.947 [2024-12-09 23:00:10.287955] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:18:34.947 [2024-12-09 23:00:10.288000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:35.202 [2024-12-09 23:00:10.380297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.202 [2024-12-09 23:00:10.380399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:35.202 [2024-12-09 23:00:10.380642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:18:35.772 23:00:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:35.772 23:00:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:35.772 23:00:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58308 00:18:35.772 23:00:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 58308 /var/tmp/spdk2.sock 00:18:35.772 23:00:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58308 ']' 00:18:35.772 23:00:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:35.772 23:00:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:35.772 23:00:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:18:35.772 23:00:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:35.772 23:00:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:18:35.772 23:00:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:35.772 [2024-12-09 23:00:11.036930] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:18:35.772 [2024-12-09 23:00:11.037529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58308 ] 00:18:36.029 [2024-12-09 23:00:11.210989] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:18:36.029 [2024-12-09 23:00:11.211047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:36.286 [2024-12-09 23:00:11.419168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:36.286 [2024-12-09 23:00:11.422176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:36.286 [2024-12-09 23:00:11.422196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:37.227 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.227 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:37.227 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:18:37.227 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.227 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.227 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.227 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:18:37.227 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:18:37.227 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:18:37.227 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:37.227 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:37.227 23:00:12 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:37.227 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:37.227 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:18:37.227 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.227 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.227 [2024-12-09 23:00:12.570232] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58291 has claimed it. 00:18:37.227 request: 00:18:37.227 { 00:18:37.227 "method": "framework_enable_cpumask_locks", 00:18:37.227 "req_id": 1 00:18:37.227 } 00:18:37.227 Got JSON-RPC error response 00:18:37.227 response: 00:18:37.227 { 00:18:37.227 "code": -32603, 00:18:37.227 "message": "Failed to claim CPU core: 2" 00:18:37.227 } 00:18:37.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
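The request/response pair printed above is a JSON-RPC exchange over /var/tmp/spdk2.sock, with the claim failure surfaced as error code -32603. A minimal sketch of separating such an error reply from a successful result, assuming standard JSON-RPC 2.0 framing (SPDK normally drives this through its rpc.py client; the helper name below is mine):

```python
import json

def parse_rpc_reply(raw: bytes):
    """Return the result of a JSON-RPC reply, raising on an error object.

    Sketch based on the reply visible in the log: an "error" member with
    "code" and "message" fields replaces "result" on failure.
    """
    reply = json.loads(raw)
    if "error" in reply:
        err = reply["error"]
        raise RuntimeError(f"rpc error {err['code']}: {err['message']}")
    return reply.get("result")

# Success path: a plain result comes back unchanged.
ok = parse_rpc_reply(b'{"jsonrpc": "2.0", "id": 1, "result": true}')
print(ok)  # -> True

# Failure path: the error object from the log raises instead.
bad = (b'{"jsonrpc": "2.0", "id": 1, "error": '
       b'{"code": -32603, "message": "Failed to claim CPU core: 2"}}')
try:
    parse_rpc_reply(bad)
except RuntimeError as e:
    print(e)  # -> rpc error -32603: Failed to claim CPU core: 2
```

This is why the surrounding test wraps the call in NOT and then asserts es=1: the RPC is expected to fail while pid 58291 still owns core 2.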
00:18:37.227 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:37.227 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:18:37.227 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:37.227 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:37.227 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:37.227 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 58291 /var/tmp/spdk.sock 00:18:37.227 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58291 ']' 00:18:37.227 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.227 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:37.227 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:37.227 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:37.227 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.485 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.485 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:37.485 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 58308 /var/tmp/spdk2.sock 00:18:37.485 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58308 ']' 00:18:37.485 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:37.485 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:37.485 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:18:37.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:18:37.485 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:37.485 23:00:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.742 23:00:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.743 23:00:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:37.743 23:00:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:18:37.743 23:00:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:18:37.743 23:00:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:18:37.743 23:00:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:18:37.743 00:18:37.743 real 0m2.957s 00:18:37.743 user 0m1.095s 00:18:37.743 sys 0m0.114s 00:18:37.743 23:00:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:37.743 23:00:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.743 ************************************ 00:18:37.743 END TEST locking_overlapped_coremask_via_rpc 00:18:37.743 ************************************ 00:18:37.743 23:00:13 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:18:37.743 23:00:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58291 ]] 00:18:37.743 23:00:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 58291 00:18:37.743 23:00:13 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58291 ']' 00:18:37.743 23:00:13 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58291 00:18:37.743 23:00:13 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:18:37.743 23:00:13 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:37.743 23:00:13 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58291 00:18:37.743 killing process with pid 58291 00:18:37.743 23:00:13 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:37.743 23:00:13 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:37.743 23:00:13 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58291' 00:18:37.743 23:00:13 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 58291 00:18:37.743 23:00:13 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 58291 00:18:39.115 23:00:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58308 ]] 00:18:39.115 23:00:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58308 00:18:39.115 23:00:14 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58308 ']' 00:18:39.115 23:00:14 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58308 00:18:39.115 23:00:14 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:18:39.115 23:00:14 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:39.115 23:00:14 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58308 00:18:39.115 killing process with pid 58308 00:18:39.115 23:00:14 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:39.115 23:00:14 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:39.115 23:00:14 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 58308' 00:18:39.115 23:00:14 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 58308 00:18:39.115 23:00:14 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 58308 00:18:40.491 23:00:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:18:40.491 23:00:15 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:18:40.491 23:00:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58291 ]] 00:18:40.491 23:00:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58291 00:18:40.491 Process with pid 58291 is not found 00:18:40.491 23:00:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58291 ']' 00:18:40.491 23:00:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58291 00:18:40.491 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (58291) - No such process 00:18:40.491 23:00:15 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 58291 is not found' 00:18:40.491 23:00:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58308 ]] 00:18:40.491 Process with pid 58308 is not found 00:18:40.491 23:00:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58308 00:18:40.491 23:00:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58308 ']' 00:18:40.491 23:00:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58308 00:18:40.491 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (58308) - No such process 00:18:40.491 23:00:15 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 58308 is not found' 00:18:40.491 23:00:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:18:40.491 ************************************ 00:18:40.491 END TEST cpu_locks 00:18:40.491 ************************************ 00:18:40.491 00:18:40.491 real 0m30.037s 00:18:40.491 user 0m51.590s 00:18:40.491 sys 0m4.461s 00:18:40.491 23:00:15 event.cpu_locks -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:18:40.491 23:00:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:40.491 ************************************ 00:18:40.491 END TEST event 00:18:40.491 ************************************ 00:18:40.491 00:18:40.491 real 0m57.421s 00:18:40.491 user 1m46.011s 00:18:40.491 sys 0m7.401s 00:18:40.491 23:00:15 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:40.491 23:00:15 event -- common/autotest_common.sh@10 -- # set +x 00:18:40.491 23:00:15 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:18:40.491 23:00:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:40.491 23:00:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:40.491 23:00:15 -- common/autotest_common.sh@10 -- # set +x 00:18:40.491 ************************************ 00:18:40.491 START TEST thread 00:18:40.491 ************************************ 00:18:40.491 23:00:15 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:18:40.491 * Looking for test storage... 
00:18:40.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:18:40.491 23:00:15 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:40.491 23:00:15 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:18:40.491 23:00:15 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:40.491 23:00:15 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:40.491 23:00:15 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:40.491 23:00:15 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:40.491 23:00:15 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:40.491 23:00:15 thread -- scripts/common.sh@336 -- # IFS=.-: 00:18:40.491 23:00:15 thread -- scripts/common.sh@336 -- # read -ra ver1 00:18:40.491 23:00:15 thread -- scripts/common.sh@337 -- # IFS=.-: 00:18:40.491 23:00:15 thread -- scripts/common.sh@337 -- # read -ra ver2 00:18:40.491 23:00:15 thread -- scripts/common.sh@338 -- # local 'op=<' 00:18:40.491 23:00:15 thread -- scripts/common.sh@340 -- # ver1_l=2 00:18:40.491 23:00:15 thread -- scripts/common.sh@341 -- # ver2_l=1 00:18:40.491 23:00:15 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:40.491 23:00:15 thread -- scripts/common.sh@344 -- # case "$op" in 00:18:40.491 23:00:15 thread -- scripts/common.sh@345 -- # : 1 00:18:40.491 23:00:15 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:40.491 23:00:15 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:40.491 23:00:15 thread -- scripts/common.sh@365 -- # decimal 1 00:18:40.491 23:00:15 thread -- scripts/common.sh@353 -- # local d=1 00:18:40.491 23:00:15 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:40.491 23:00:15 thread -- scripts/common.sh@355 -- # echo 1 00:18:40.491 23:00:15 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:18:40.491 23:00:15 thread -- scripts/common.sh@366 -- # decimal 2 00:18:40.491 23:00:15 thread -- scripts/common.sh@353 -- # local d=2 00:18:40.491 23:00:15 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:40.491 23:00:15 thread -- scripts/common.sh@355 -- # echo 2 00:18:40.491 23:00:15 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:18:40.491 23:00:15 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:40.491 23:00:15 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:40.491 23:00:15 thread -- scripts/common.sh@368 -- # return 0 00:18:40.491 23:00:15 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:40.491 23:00:15 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:40.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.491 --rc genhtml_branch_coverage=1 00:18:40.491 --rc genhtml_function_coverage=1 00:18:40.491 --rc genhtml_legend=1 00:18:40.491 --rc geninfo_all_blocks=1 00:18:40.491 --rc geninfo_unexecuted_blocks=1 00:18:40.491 00:18:40.491 ' 00:18:40.491 23:00:15 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:40.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.491 --rc genhtml_branch_coverage=1 00:18:40.491 --rc genhtml_function_coverage=1 00:18:40.491 --rc genhtml_legend=1 00:18:40.491 --rc geninfo_all_blocks=1 00:18:40.491 --rc geninfo_unexecuted_blocks=1 00:18:40.491 00:18:40.491 ' 00:18:40.491 23:00:15 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:40.491 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.491 --rc genhtml_branch_coverage=1 00:18:40.491 --rc genhtml_function_coverage=1 00:18:40.491 --rc genhtml_legend=1 00:18:40.491 --rc geninfo_all_blocks=1 00:18:40.491 --rc geninfo_unexecuted_blocks=1 00:18:40.491 00:18:40.491 ' 00:18:40.491 23:00:15 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:40.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.491 --rc genhtml_branch_coverage=1 00:18:40.491 --rc genhtml_function_coverage=1 00:18:40.491 --rc genhtml_legend=1 00:18:40.491 --rc geninfo_all_blocks=1 00:18:40.491 --rc geninfo_unexecuted_blocks=1 00:18:40.491 00:18:40.491 ' 00:18:40.491 23:00:15 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:18:40.491 23:00:15 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:18:40.491 23:00:15 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:40.491 23:00:15 thread -- common/autotest_common.sh@10 -- # set +x 00:18:40.491 ************************************ 00:18:40.491 START TEST thread_poller_perf 00:18:40.491 ************************************ 00:18:40.491 23:00:15 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:18:40.491 [2024-12-09 23:00:15.821666] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:18:40.491 [2024-12-09 23:00:15.821846] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58464 ] 00:18:40.767 [2024-12-09 23:00:15.967785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.767 [2024-12-09 23:00:16.070644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.767 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:18:42.166 [2024-12-09T23:00:17.529Z] ====================================== 00:18:42.166 [2024-12-09T23:00:17.529Z] busy:2608644064 (cyc) 00:18:42.166 [2024-12-09T23:00:17.529Z] total_run_count: 306000 00:18:42.166 [2024-12-09T23:00:17.529Z] tsc_hz: 2600000000 (cyc) 00:18:42.166 [2024-12-09T23:00:17.529Z] ====================================== 00:18:42.166 [2024-12-09T23:00:17.529Z] poller_cost: 8524 (cyc), 3278 (nsec) 00:18:42.166 00:18:42.166 real 0m1.441s 00:18:42.166 user 0m1.273s 00:18:42.166 sys 0m0.059s 00:18:42.166 23:00:17 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:42.166 ************************************ 00:18:42.166 END TEST thread_poller_perf 00:18:42.166 ************************************ 00:18:42.166 23:00:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:18:42.166 23:00:17 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:18:42.166 23:00:17 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:18:42.166 23:00:17 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:42.166 23:00:17 thread -- common/autotest_common.sh@10 -- # set +x 00:18:42.166 ************************************ 00:18:42.166 START TEST thread_poller_perf 00:18:42.166 
************************************ 00:18:42.166 23:00:17 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:18:42.166 [2024-12-09 23:00:17.320954] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:18:42.166 [2024-12-09 23:00:17.321207] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58500 ] 00:18:42.166 [2024-12-09 23:00:17.480323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.424 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:18:42.424 [2024-12-09 23:00:17.578796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.833 [2024-12-09T23:00:19.196Z] ====================================== 00:18:43.833 [2024-12-09T23:00:19.196Z] busy:2603662670 (cyc) 00:18:43.833 [2024-12-09T23:00:19.196Z] total_run_count: 3646000 00:18:43.833 [2024-12-09T23:00:19.196Z] tsc_hz: 2600000000 (cyc) 00:18:43.833 [2024-12-09T23:00:19.196Z] ====================================== 00:18:43.833 [2024-12-09T23:00:19.196Z] poller_cost: 714 (cyc), 274 (nsec) 00:18:43.833 00:18:43.833 real 0m1.451s 00:18:43.833 user 0m1.272s 00:18:43.833 sys 0m0.072s 00:18:43.833 23:00:18 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:43.833 23:00:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:18:43.833 ************************************ 00:18:43.833 END TEST thread_poller_perf 00:18:43.833 ************************************ 00:18:43.833 23:00:18 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:18:43.833 00:18:43.833 real 0m3.118s 00:18:43.833 user 0m2.652s 00:18:43.833 sys 0m0.243s 00:18:43.833 ************************************ 
00:18:43.833 END TEST thread 00:18:43.833 ************************************ 00:18:43.833 23:00:18 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:43.833 23:00:18 thread -- common/autotest_common.sh@10 -- # set +x 00:18:43.833 23:00:18 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:18:43.833 23:00:18 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:18:43.833 23:00:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:43.833 23:00:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:43.833 23:00:18 -- common/autotest_common.sh@10 -- # set +x 00:18:43.833 ************************************ 00:18:43.833 START TEST app_cmdline 00:18:43.833 ************************************ 00:18:43.833 23:00:18 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:18:43.833 * Looking for test storage... 00:18:43.833 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:18:43.833 23:00:18 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:43.833 23:00:18 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:18:43.833 23:00:18 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:43.833 23:00:18 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:43.833 23:00:18 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:43.833 23:00:18 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:43.833 23:00:18 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:43.833 23:00:18 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:18:43.833 23:00:18 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:18:43.833 23:00:18 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:18:43.833 23:00:18 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:18:43.833 23:00:18 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
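Annotation (not part of the log): the `poller_cost` figures reported in the two `thread_poller_perf` runs above follow directly from the printed counters. A hedged sketch of the arithmetic, using the values copied from the log (`busy` cycles, `total_run_count`, `tsc_hz`); integer floor division is an assumption that happens to reproduce the logged numbers:

```python
# Hedged sketch: reproduce the poller_cost arithmetic printed by the two
# thread_poller_perf runs above (input values copied from the log).
TSC_HZ = 2_600_000_000  # cycles per second, as reported (tsc_hz)

def poller_cost(busy_cycles: int, total_run_count: int, tsc_hz: int = TSC_HZ):
    """Cycles (and nanoseconds) spent per poller invocation."""
    cyc = busy_cycles // total_run_count
    nsec = cyc * 1_000_000_000 // tsc_hz
    return cyc, nsec

# Run with -b 1000 -t 1 and a 1 us period:
print(poller_cost(2_608_644_064, 306_000))    # (8524, 3278), as logged
# Run with -b 1000 -t 1 and a 0 us period:
print(poller_cost(2_603_662_670, 3_646_000))  # (714, 274), as logged
```

The ~12x drop in per-poll cost between the two runs reflects the 0-microsecond period: pollers fire back-to-back, so fixed scheduling overhead is amortized over far more invocations.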
00:18:43.833 23:00:18 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:18:43.833 23:00:18 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:18:43.833 23:00:18 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:43.833 23:00:18 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:18:43.833 23:00:18 app_cmdline -- scripts/common.sh@345 -- # : 1 00:18:43.833 23:00:18 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:43.833 23:00:18 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:43.833 23:00:18 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:18:43.833 23:00:18 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:18:43.833 23:00:18 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:43.833 23:00:18 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:18:43.833 23:00:18 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:18:43.833 23:00:18 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:18:43.833 23:00:18 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:18:43.833 23:00:18 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:43.833 23:00:18 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:18:43.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
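Annotation (not part of the log): the `scripts/common.sh` trace around here (`cmp_versions 1.15 '<' 2`, `IFS=.-:`, `decimal 1`, `decimal 2`) is the lcov version gate. A hedged Python sketch of the same comparison logic; the regex split and zero-padding of missing components are assumptions mirroring the shell's `IFS=.-:` array read:

```python
# Hedged sketch of the cmp_versions logic visible in the shell trace above:
# versions are split on '.', '-' and ':' and compared component-by-component
# as integers; missing components are treated as 0.
import re

def ver_lt(a: str, b: str) -> bool:
    """Return True if version a sorts before version b."""
    pa = [int(x) for x in re.split(r"[.\-:]", a)]
    pb = [int(x) for x in re.split(r"[.\-:]", b)]
    for i in range(max(len(pa), len(pb))):
        va = pa[i] if i < len(pa) else 0
        vb = pb[i] if i < len(pb) else 0
        if va != vb:
            return va < vb
    return False  # equal versions are not "less than"

print(ver_lt("1.15", "2"))  # True -> the trace takes the old-lcov branch
```

In the log this `True` result is what selects the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` option spelling exported just below.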
00:18:43.833 23:00:18 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:18:43.833 23:00:18 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:43.833 23:00:18 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:43.833 23:00:18 app_cmdline -- scripts/common.sh@368 -- # return 0 00:18:43.833 23:00:18 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:43.833 23:00:18 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:43.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.833 --rc genhtml_branch_coverage=1 00:18:43.833 --rc genhtml_function_coverage=1 00:18:43.833 --rc genhtml_legend=1 00:18:43.833 --rc geninfo_all_blocks=1 00:18:43.833 --rc geninfo_unexecuted_blocks=1 00:18:43.833 00:18:43.833 ' 00:18:43.833 23:00:18 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:43.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.833 --rc genhtml_branch_coverage=1 00:18:43.833 --rc genhtml_function_coverage=1 00:18:43.833 --rc genhtml_legend=1 00:18:43.833 --rc geninfo_all_blocks=1 00:18:43.833 --rc geninfo_unexecuted_blocks=1 00:18:43.833 00:18:43.833 ' 00:18:43.833 23:00:18 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:43.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.833 --rc genhtml_branch_coverage=1 00:18:43.833 --rc genhtml_function_coverage=1 00:18:43.833 --rc genhtml_legend=1 00:18:43.833 --rc geninfo_all_blocks=1 00:18:43.833 --rc geninfo_unexecuted_blocks=1 00:18:43.833 00:18:43.833 ' 00:18:43.833 23:00:18 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:43.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.833 --rc genhtml_branch_coverage=1 00:18:43.833 --rc genhtml_function_coverage=1 00:18:43.833 --rc genhtml_legend=1 00:18:43.833 --rc geninfo_all_blocks=1 00:18:43.833 --rc 
geninfo_unexecuted_blocks=1 00:18:43.833 00:18:43.833 ' 00:18:43.833 23:00:18 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:18:43.833 23:00:18 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=58584 00:18:43.833 23:00:18 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 58584 00:18:43.833 23:00:18 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 58584 ']' 00:18:43.833 23:00:18 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:18:43.833 23:00:18 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.833 23:00:18 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:43.833 23:00:18 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.833 23:00:18 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:43.833 23:00:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:18:43.833 [2024-12-09 23:00:19.015946] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:18:43.833 [2024-12-09 23:00:19.016071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58584 ] 00:18:43.833 [2024-12-09 23:00:19.172065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.091 [2024-12-09 23:00:19.275042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.666 23:00:19 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:44.666 23:00:19 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:18:44.666 23:00:19 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:18:44.924 { 00:18:44.924 "version": "SPDK v25.01-pre git sha1 43c35d804", 00:18:44.924 "fields": { 00:18:44.924 "major": 25, 00:18:44.924 "minor": 1, 00:18:44.924 "patch": 0, 00:18:44.924 "suffix": "-pre", 00:18:44.924 "commit": "43c35d804" 00:18:44.924 } 00:18:44.924 } 00:18:44.924 23:00:20 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:18:44.924 23:00:20 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:18:44.924 23:00:20 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:18:44.924 23:00:20 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:18:44.924 23:00:20 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:18:44.924 23:00:20 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:18:44.924 23:00:20 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.924 23:00:20 app_cmdline -- app/cmdline.sh@26 -- # sort 00:18:44.924 23:00:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:18:44.924 23:00:20 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.924 23:00:20 app_cmdline -- 
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:18:44.924 23:00:20 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:18:44.924 23:00:20 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:18:44.924 23:00:20 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:18:44.924 23:00:20 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:18:44.924 23:00:20 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:44.924 23:00:20 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:44.924 23:00:20 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:44.924 23:00:20 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:44.924 23:00:20 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:44.924 23:00:20 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:44.924 23:00:20 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:44.924 23:00:20 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:44.924 23:00:20 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:18:45.184 request: 00:18:45.184 { 00:18:45.184 "method": "env_dpdk_get_mem_stats", 00:18:45.184 "req_id": 1 00:18:45.184 } 00:18:45.184 Got JSON-RPC error response 00:18:45.184 response: 00:18:45.184 { 00:18:45.184 "code": -32601, 00:18:45.184 "message": "Method not found" 00:18:45.184 } 00:18:45.184 23:00:20 app_cmdline -- common/autotest_common.sh@655 -- # es=1 
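Annotation (not part of the log): the `request:`/`response:` pair just above is the expected-failure half of the cmdline test. `spdk_tgt` was started with `--rpcs-allowed spdk_get_version,rpc_get_methods`, so calling any other method (`env_dpdk_get_mem_stats`) yields JSON-RPC error `-32601`. A hedged sketch of that exchange's shape, with the allowed-method set taken from the launch line earlier in the log; transport details are omitted:

```python
# Hedged sketch: shape of the JSON-RPC exchange logged above. Any method
# outside the --rpcs-allowed list is rejected with -32601 "Method not found".
import json

request = {"method": "env_dpdk_get_mem_stats", "req_id": 1}

# Error envelope as it appears in the log (other response fields omitted).
response = {"code": -32601, "message": "Method not found"}

allowed = {"spdk_get_version", "rpc_get_methods"}  # from the spdk_tgt launch line
assert request["method"] not in allowed
print(json.dumps(response))  # {"code": -32601, "message": "Method not found"}
```

The nonzero `rpc.py` exit then feeds the `es=1` / `(( !es == 0 ))` check, so the test passes precisely because the call failed.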
00:18:45.184 23:00:20 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:45.184 23:00:20 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:45.184 23:00:20 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:45.184 23:00:20 app_cmdline -- app/cmdline.sh@1 -- # killprocess 58584 00:18:45.184 23:00:20 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 58584 ']' 00:18:45.184 23:00:20 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 58584 00:18:45.184 23:00:20 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:18:45.184 23:00:20 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:45.184 23:00:20 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58584 00:18:45.184 killing process with pid 58584 00:18:45.184 23:00:20 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:45.184 23:00:20 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:45.184 23:00:20 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58584' 00:18:45.184 23:00:20 app_cmdline -- common/autotest_common.sh@973 -- # kill 58584 00:18:45.184 23:00:20 app_cmdline -- common/autotest_common.sh@978 -- # wait 58584 00:18:46.560 ************************************ 00:18:46.560 END TEST app_cmdline 00:18:46.560 ************************************ 00:18:46.560 00:18:46.560 real 0m3.036s 00:18:46.560 user 0m3.364s 00:18:46.560 sys 0m0.419s 00:18:46.560 23:00:21 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:46.560 23:00:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:18:46.560 23:00:21 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:18:46.560 23:00:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:46.560 23:00:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:46.560 23:00:21 -- 
common/autotest_common.sh@10 -- # set +x 00:18:46.560 ************************************ 00:18:46.560 START TEST version 00:18:46.560 ************************************ 00:18:46.560 23:00:21 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:18:46.818 * Looking for test storage... 00:18:46.818 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:18:46.818 23:00:21 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:46.818 23:00:21 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:46.818 23:00:21 version -- common/autotest_common.sh@1711 -- # lcov --version 00:18:46.818 23:00:22 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:46.818 23:00:22 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:46.818 23:00:22 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:46.818 23:00:22 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:46.818 23:00:22 version -- scripts/common.sh@336 -- # IFS=.-: 00:18:46.818 23:00:22 version -- scripts/common.sh@336 -- # read -ra ver1 00:18:46.818 23:00:22 version -- scripts/common.sh@337 -- # IFS=.-: 00:18:46.818 23:00:22 version -- scripts/common.sh@337 -- # read -ra ver2 00:18:46.818 23:00:22 version -- scripts/common.sh@338 -- # local 'op=<' 00:18:46.818 23:00:22 version -- scripts/common.sh@340 -- # ver1_l=2 00:18:46.818 23:00:22 version -- scripts/common.sh@341 -- # ver2_l=1 00:18:46.818 23:00:22 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:46.819 23:00:22 version -- scripts/common.sh@344 -- # case "$op" in 00:18:46.819 23:00:22 version -- scripts/common.sh@345 -- # : 1 00:18:46.819 23:00:22 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:46.819 23:00:22 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:46.819 23:00:22 version -- scripts/common.sh@365 -- # decimal 1 00:18:46.819 23:00:22 version -- scripts/common.sh@353 -- # local d=1 00:18:46.819 23:00:22 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:46.819 23:00:22 version -- scripts/common.sh@355 -- # echo 1 00:18:46.819 23:00:22 version -- scripts/common.sh@365 -- # ver1[v]=1 00:18:46.819 23:00:22 version -- scripts/common.sh@366 -- # decimal 2 00:18:46.819 23:00:22 version -- scripts/common.sh@353 -- # local d=2 00:18:46.819 23:00:22 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:46.819 23:00:22 version -- scripts/common.sh@355 -- # echo 2 00:18:46.819 23:00:22 version -- scripts/common.sh@366 -- # ver2[v]=2 00:18:46.819 23:00:22 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:46.819 23:00:22 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:46.819 23:00:22 version -- scripts/common.sh@368 -- # return 0 00:18:46.819 23:00:22 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:46.819 23:00:22 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:46.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.819 --rc genhtml_branch_coverage=1 00:18:46.819 --rc genhtml_function_coverage=1 00:18:46.819 --rc genhtml_legend=1 00:18:46.819 --rc geninfo_all_blocks=1 00:18:46.819 --rc geninfo_unexecuted_blocks=1 00:18:46.819 00:18:46.819 ' 00:18:46.819 23:00:22 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:46.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.819 --rc genhtml_branch_coverage=1 00:18:46.819 --rc genhtml_function_coverage=1 00:18:46.819 --rc genhtml_legend=1 00:18:46.819 --rc geninfo_all_blocks=1 00:18:46.819 --rc geninfo_unexecuted_blocks=1 00:18:46.819 00:18:46.819 ' 00:18:46.819 23:00:22 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:46.819 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.819 --rc genhtml_branch_coverage=1 00:18:46.819 --rc genhtml_function_coverage=1 00:18:46.819 --rc genhtml_legend=1 00:18:46.819 --rc geninfo_all_blocks=1 00:18:46.819 --rc geninfo_unexecuted_blocks=1 00:18:46.819 00:18:46.819 ' 00:18:46.819 23:00:22 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:46.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.819 --rc genhtml_branch_coverage=1 00:18:46.819 --rc genhtml_function_coverage=1 00:18:46.819 --rc genhtml_legend=1 00:18:46.819 --rc geninfo_all_blocks=1 00:18:46.819 --rc geninfo_unexecuted_blocks=1 00:18:46.819 00:18:46.819 ' 00:18:46.819 23:00:22 version -- app/version.sh@17 -- # get_header_version major 00:18:46.819 23:00:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:46.819 23:00:22 version -- app/version.sh@14 -- # cut -f2 00:18:46.819 23:00:22 version -- app/version.sh@14 -- # tr -d '"' 00:18:46.819 23:00:22 version -- app/version.sh@17 -- # major=25 00:18:46.819 23:00:22 version -- app/version.sh@18 -- # get_header_version minor 00:18:46.819 23:00:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:46.819 23:00:22 version -- app/version.sh@14 -- # cut -f2 00:18:46.819 23:00:22 version -- app/version.sh@14 -- # tr -d '"' 00:18:46.819 23:00:22 version -- app/version.sh@18 -- # minor=1 00:18:46.819 23:00:22 version -- app/version.sh@19 -- # get_header_version patch 00:18:46.819 23:00:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:46.819 23:00:22 version -- app/version.sh@14 -- # cut -f2 00:18:46.819 23:00:22 version -- app/version.sh@14 -- # tr -d '"' 00:18:46.819 23:00:22 version -- app/version.sh@19 -- # patch=0 00:18:46.819 
23:00:22 version -- app/version.sh@20 -- # get_header_version suffix 00:18:46.819 23:00:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:46.819 23:00:22 version -- app/version.sh@14 -- # cut -f2 00:18:46.819 23:00:22 version -- app/version.sh@14 -- # tr -d '"' 00:18:46.819 23:00:22 version -- app/version.sh@20 -- # suffix=-pre 00:18:46.819 23:00:22 version -- app/version.sh@22 -- # version=25.1 00:18:46.819 23:00:22 version -- app/version.sh@25 -- # (( patch != 0 )) 00:18:46.819 23:00:22 version -- app/version.sh@28 -- # version=25.1rc0 00:18:46.819 23:00:22 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:18:46.819 23:00:22 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:18:46.819 23:00:22 version -- app/version.sh@30 -- # py_version=25.1rc0 00:18:46.819 23:00:22 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:18:46.819 00:18:46.819 real 0m0.196s 00:18:46.819 user 0m0.120s 00:18:46.819 sys 0m0.103s 00:18:46.819 ************************************ 00:18:46.819 END TEST version 00:18:46.819 ************************************ 00:18:46.819 23:00:22 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:46.819 23:00:22 version -- common/autotest_common.sh@10 -- # set +x 00:18:46.819 23:00:22 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:18:46.819 23:00:22 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:18:46.819 23:00:22 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:18:46.819 23:00:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:46.819 23:00:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:46.819 23:00:22 -- 
common/autotest_common.sh@10 -- # set +x 00:18:46.819 ************************************ 00:18:46.819 START TEST bdev_raid 00:18:46.819 ************************************ 00:18:46.819 23:00:22 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:18:47.077 * Looking for test storage... 00:18:47.077 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:47.077 23:00:22 bdev_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:47.077 23:00:22 bdev_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:47.077 23:00:22 bdev_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:18:47.077 23:00:22 bdev_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:47.077 23:00:22 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:47.077 23:00:22 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:47.077 23:00:22 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:47.077 23:00:22 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:18:47.077 23:00:22 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:18:47.077 23:00:22 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:18:47.077 23:00:22 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:18:47.077 23:00:22 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:18:47.077 23:00:22 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:18:47.077 23:00:22 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:18:47.077 23:00:22 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:47.078 23:00:22 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:18:47.078 23:00:22 bdev_raid -- scripts/common.sh@345 -- # : 1 00:18:47.078 23:00:22 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:47.078 23:00:22 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:47.078 23:00:22 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:18:47.078 23:00:22 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:18:47.078 23:00:22 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:47.078 23:00:22 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:18:47.078 23:00:22 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:47.078 23:00:22 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:18:47.078 23:00:22 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:18:47.078 23:00:22 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:47.078 23:00:22 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:18:47.078 23:00:22 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:47.078 23:00:22 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:47.078 23:00:22 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:47.078 23:00:22 bdev_raid -- scripts/common.sh@368 -- # return 0 00:18:47.078 23:00:22 bdev_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:47.078 23:00:22 bdev_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:47.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.078 --rc genhtml_branch_coverage=1 00:18:47.078 --rc genhtml_function_coverage=1 00:18:47.078 --rc genhtml_legend=1 00:18:47.078 --rc geninfo_all_blocks=1 00:18:47.078 --rc geninfo_unexecuted_blocks=1 00:18:47.078 00:18:47.078 ' 00:18:47.078 23:00:22 bdev_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:47.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.078 --rc genhtml_branch_coverage=1 00:18:47.078 --rc genhtml_function_coverage=1 00:18:47.078 --rc genhtml_legend=1 00:18:47.078 --rc geninfo_all_blocks=1 00:18:47.078 --rc geninfo_unexecuted_blocks=1 00:18:47.078 00:18:47.078 ' 00:18:47.078 23:00:22 bdev_raid -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:18:47.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.078 --rc genhtml_branch_coverage=1 00:18:47.078 --rc genhtml_function_coverage=1 00:18:47.078 --rc genhtml_legend=1 00:18:47.078 --rc geninfo_all_blocks=1 00:18:47.078 --rc geninfo_unexecuted_blocks=1 00:18:47.078 00:18:47.078 ' 00:18:47.078 23:00:22 bdev_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:47.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.078 --rc genhtml_branch_coverage=1 00:18:47.078 --rc genhtml_function_coverage=1 00:18:47.078 --rc genhtml_legend=1 00:18:47.078 --rc geninfo_all_blocks=1 00:18:47.078 --rc geninfo_unexecuted_blocks=1 00:18:47.078 00:18:47.078 ' 00:18:47.078 23:00:22 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:47.078 23:00:22 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:18:47.078 23:00:22 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:18:47.078 23:00:22 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:18:47.078 23:00:22 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:18:47.078 23:00:22 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:18:47.078 23:00:22 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:18:47.078 23:00:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:47.078 23:00:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:47.078 23:00:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:47.078 ************************************ 00:18:47.078 START TEST raid1_resize_data_offset_test 00:18:47.078 ************************************ 00:18:47.078 23:00:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:18:47.078 23:00:22 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # 
raid_pid=58755 00:18:47.078 23:00:22 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 58755' 00:18:47.078 Process raid pid: 58755 00:18:47.078 23:00:22 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 58755 00:18:47.078 23:00:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 58755 ']' 00:18:47.078 23:00:22 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:47.078 23:00:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.078 23:00:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:47.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.078 23:00:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.078 23:00:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:47.078 23:00:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.078 [2024-12-09 23:00:22.328866] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:18:47.078 [2024-12-09 23:00:22.329130] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.337 [2024-12-09 23:00:22.481964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.337 [2024-12-09 23:00:22.584018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.595 [2024-12-09 23:00:22.724416] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:47.595 [2024-12-09 23:00:22.724594] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:47.852 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:47.852 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:18:47.852 23:00:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:18:47.852 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.852 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.852 malloc0 00:18:47.852 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.852 23:00:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:18:47.852 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.852 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.111 malloc1 00:18:48.111 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.111 23:00:23 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:18:48.111 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.111 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.111 null0 00:18:48.111 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.111 23:00:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:18:48.111 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.111 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.111 [2024-12-09 23:00:23.259245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:18:48.111 [2024-12-09 23:00:23.261140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:48.111 [2024-12-09 23:00:23.261195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:18:48.111 [2024-12-09 23:00:23.261333] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:48.111 [2024-12-09 23:00:23.261346] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:18:48.111 [2024-12-09 23:00:23.261632] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:18:48.111 [2024-12-09 23:00:23.261776] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:48.111 [2024-12-09 23:00:23.261787] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:18:48.111 [2024-12-09 23:00:23.261932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:18:48.111 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.111 23:00:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.111 23:00:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:18:48.111 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.111 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.111 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.111 23:00:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:18:48.111 23:00:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:18:48.111 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.111 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.111 [2024-12-09 23:00:23.299253] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:18:48.111 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.111 23:00:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:18:48.111 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.111 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.375 malloc2 00:18:48.375 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.375 23:00:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:18:48.375 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.375 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.375 [2024-12-09 23:00:23.673131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:48.375 [2024-12-09 23:00:23.684938] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:48.375 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.375 [2024-12-09 23:00:23.686953] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:18:48.375 23:00:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:18:48.375 23:00:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.375 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.375 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.375 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.375 23:00:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:18:48.375 23:00:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 58755 00:18:48.375 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 58755 ']' 00:18:48.375 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 58755 00:18:48.375 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:18:48.375 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:18:48.375 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58755 00:18:48.633 killing process with pid 58755 00:18:48.633 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:48.633 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:48.633 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58755' 00:18:48.633 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 58755 00:18:48.633 [2024-12-09 23:00:23.748376] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:48.633 23:00:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 58755 00:18:48.633 [2024-12-09 23:00:23.749042] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:18:48.633 [2024-12-09 23:00:23.749247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:48.633 [2024-12-09 23:00:23.749265] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:18:48.633 [2024-12-09 23:00:23.772434] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:48.633 [2024-12-09 23:00:23.772873] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:48.633 [2024-12-09 23:00:23.772896] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:18:49.564 [2024-12-09 23:00:24.871390] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:50.502 23:00:25 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:18:50.502 00:18:50.502 real 0m3.338s 00:18:50.502 user 0m3.255s 00:18:50.502 sys 0m0.411s 00:18:50.502 23:00:25 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:50.502 ************************************ 00:18:50.502 END TEST raid1_resize_data_offset_test 00:18:50.502 ************************************ 00:18:50.502 23:00:25 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.502 23:00:25 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:18:50.502 23:00:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:50.502 23:00:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:50.502 23:00:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:50.502 ************************************ 00:18:50.502 START TEST raid0_resize_superblock_test 00:18:50.502 ************************************ 00:18:50.502 23:00:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:18:50.502 23:00:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:18:50.502 Process raid pid: 58827 00:18:50.502 23:00:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=58827 00:18:50.502 23:00:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 58827' 00:18:50.502 23:00:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 58827 00:18:50.502 23:00:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 58827 ']' 00:18:50.502 23:00:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:50.502 23:00:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.502 23:00:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:50.502 23:00:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.502 23:00:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.502 23:00:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.502 [2024-12-09 23:00:25.715430] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:18:50.502 [2024-12-09 23:00:25.715554] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.761 [2024-12-09 23:00:25.873479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.761 [2024-12-09 23:00:25.976446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.761 [2024-12-09 23:00:26.114771] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:50.761 [2024-12-09 23:00:26.114812] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:51.336 23:00:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.336 23:00:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:18:51.336 23:00:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:18:51.336 23:00:26 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.336 23:00:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.594 malloc0 00:18:51.594 23:00:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.594 23:00:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:18:51.594 23:00:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.594 23:00:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.594 [2024-12-09 23:00:26.932294] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:18:51.594 [2024-12-09 23:00:26.932368] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.594 [2024-12-09 23:00:26.932390] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:51.594 [2024-12-09 23:00:26.932402] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.594 [2024-12-09 23:00:26.934689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.594 [2024-12-09 23:00:26.934733] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:18:51.594 pt0 00:18:51.594 23:00:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.594 23:00:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:18:51.594 23:00:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.594 23:00:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.852 6afc7879-856c-48de-8229-2b1170467065 00:18:51.852 23:00:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:51.852 23:00:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:18:51.852 23:00:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.852 23:00:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.852 469c7e01-f540-4417-b0eb-8d5123b6144a 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.852 13606bed-14bb-44b2-ad10-29e3d10252a1 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.852 [2024-12-09 23:00:27.023403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 469c7e01-f540-4417-b0eb-8d5123b6144a is claimed 00:18:51.852 [2024-12-09 23:00:27.023513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 13606bed-14bb-44b2-ad10-29e3d10252a1 is claimed 00:18:51.852 [2024-12-09 23:00:27.023654] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:51.852 
[2024-12-09 23:00:27.023669] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:18:51.852 [2024-12-09 23:00:27.023946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:51.852 [2024-12-09 23:00:27.024142] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:51.852 [2024-12-09 23:00:27.024154] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:18:51.852 [2024-12-09 23:00:27.024316] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.852 [2024-12-09 23:00:27.099676] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.852 [2024-12-09 23:00:27.135645] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:18:51.852 [2024-12-09 23:00:27.135819] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '469c7e01-f540-4417-b0eb-8d5123b6144a' was resized: old size 131072, 
new size 204800 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.852 [2024-12-09 23:00:27.143575] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:18:51.852 [2024-12-09 23:00:27.143600] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '13606bed-14bb-44b2-ad10-29e3d10252a1' was resized: old size 131072, new size 204800 00:18:51.852 [2024-12-09 23:00:27.143632] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:18:51.852 
23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.852 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:18:52.111 [2024-12-09 23:00:27.223714] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:18:52.111 [2024-12-09 23:00:27.255458] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:18:52.111 [2024-12-09 23:00:27.255536] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:18:52.111 [2024-12-09 23:00:27.255550] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:52.111 [2024-12-09 23:00:27.255561] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:18:52.111 [2024-12-09 23:00:27.255664] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:52.111 [2024-12-09 23:00:27.255700] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:52.111 [2024-12-09 23:00:27.255711] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.111 [2024-12-09 23:00:27.263385] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:18:52.111 [2024-12-09 23:00:27.263436] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:52.111 [2024-12-09 23:00:27.263454] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:18:52.111 [2024-12-09 23:00:27.263465] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:52.111 [2024-12-09 23:00:27.265723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:18:52.111 [2024-12-09 23:00:27.265762] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:18:52.111 [2024-12-09 23:00:27.267366] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 469c7e01-f540-4417-b0eb-8d5123b6144a 00:18:52.111 [2024-12-09 23:00:27.267420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 469c7e01-f540-4417-b0eb-8d5123b6144a is claimed 00:18:52.111 [2024-12-09 23:00:27.267520] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 13606bed-14bb-44b2-ad10-29e3d10252a1 00:18:52.111 [2024-12-09 23:00:27.267537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 13606bed-14bb-44b2-ad10-29e3d10252a1 is claimed 00:18:52.111 [2024-12-09 23:00:27.267689] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 13606bed-14bb-44b2-ad10-29e3d10252a1 (2) smaller than existing raid bdev Raid (3) 00:18:52.111 [2024-12-09 23:00:27.267711] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 469c7e01-f540-4417-b0eb-8d5123b6144a: File exists 00:18:52.111 [2024-12-09 23:00:27.267748] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:52.111 [2024-12-09 23:00:27.267758] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:18:52.111 pt0 00:18:52.111 [2024-12-09 23:00:27.267998] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:52.111 [2024-12-09 23:00:27.268153] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:52.111 [2024-12-09 23:00:27.268162] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # 
rpc_cmd bdev_wait_for_examine 00:18:52.111 [2024-12-09 23:00:27.268306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.111 [2024-12-09 23:00:27.283711] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 58827 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 58827 ']' 00:18:52.111 23:00:27 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 58827 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58827 00:18:52.111 killing process with pid 58827 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58827' 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 58827 00:18:52.111 [2024-12-09 23:00:27.331918] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:52.111 23:00:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 58827 00:18:52.111 [2024-12-09 23:00:27.331994] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:52.112 [2024-12-09 23:00:27.332038] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:52.112 [2024-12-09 23:00:27.332046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:18:53.067 [2024-12-09 23:00:28.226646] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:53.638 23:00:28 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:18:53.638 00:18:53.638 real 0m3.314s 00:18:53.638 user 0m3.528s 00:18:53.638 sys 0m0.396s 00:18:53.638 23:00:28 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:18:53.638 23:00:28 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.638 ************************************ 00:18:53.638 END TEST raid0_resize_superblock_test 00:18:53.638 ************************************ 00:18:53.902 23:00:29 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:18:53.902 23:00:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:53.902 23:00:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:53.902 23:00:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:53.902 ************************************ 00:18:53.902 START TEST raid1_resize_superblock_test 00:18:53.902 ************************************ 00:18:53.902 Process raid pid: 58909 00:18:53.902 23:00:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:18:53.902 23:00:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:18:53.902 23:00:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=58909 00:18:53.902 23:00:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 58909' 00:18:53.902 23:00:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 58909 00:18:53.902 23:00:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 58909 ']' 00:18:53.902 23:00:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:53.902 23:00:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:53.902 23:00:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.902 23:00:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.902 23:00:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.902 23:00:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.902 [2024-12-09 23:00:29.074811] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:18:53.902 [2024-12-09 23:00:29.074940] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.902 [2024-12-09 23:00:29.236474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.159 [2024-12-09 23:00:29.340312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.159 [2024-12-09 23:00:29.481614] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:54.159 [2024-12-09 23:00:29.481653] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:54.724 23:00:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.724 23:00:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:18:54.724 23:00:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:18:54.724 23:00:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.724 23:00:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.981 malloc0 00:18:54.981 
23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.981 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:18:54.981 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.981 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.981 [2024-12-09 23:00:30.314210] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:18:54.981 [2024-12-09 23:00:30.314412] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:54.981 [2024-12-09 23:00:30.314460] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:54.981 [2024-12-09 23:00:30.314566] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:54.981 [2024-12-09 23:00:30.316771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:54.981 [2024-12-09 23:00:30.316903] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:18:54.981 pt0 00:18:54.981 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.981 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:18:54.981 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.981 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.239 cbe10533-c0b2-4165-998c-baf1c14e959f 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:18:55.239 23:00:30 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.239 02f2000b-025f-40e5-ad4a-e9c33b77c4a4 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.239 c7b66ca9-edfd-4d8d-97d2-52b397c6a8f5 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.239 [2024-12-09 23:00:30.404416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 02f2000b-025f-40e5-ad4a-e9c33b77c4a4 is claimed 00:18:55.239 [2024-12-09 23:00:30.404499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c7b66ca9-edfd-4d8d-97d2-52b397c6a8f5 is claimed 00:18:55.239 [2024-12-09 23:00:30.404635] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:55.239 [2024-12-09 23:00:30.404650] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:18:55.239 [2024-12-09 23:00:30.404904] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:55.239 [2024-12-09 23:00:30.405113] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:55.239 [2024-12-09 23:00:30.405124] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:18:55.239 [2024-12-09 23:00:30.405269] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@879 -- # case $raid_level in 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:18:55.239 [2024-12-09 23:00:30.484683] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.239 [2024-12-09 23:00:30.520616] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:18:55.239 [2024-12-09 23:00:30.520640] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '02f2000b-025f-40e5-ad4a-e9c33b77c4a4' was resized: old size 131072, new size 204800 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.239 23:00:30 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.239 [2024-12-09 23:00:30.528558] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:18:55.239 [2024-12-09 23:00:30.528578] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'c7b66ca9-edfd-4d8d-97d2-52b397c6a8f5' was resized: old size 131072, new size 204800 00:18:55.239 [2024-12-09 23:00:30.528606] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:18:55.239 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.240 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.240 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.240 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:18:55.240 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:18:55.240 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:18:55.240 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.240 23:00:30 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.240 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.240 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:18:55.498 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:18:55.498 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:18:55.498 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:18:55.498 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:18:55.498 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.498 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.498 [2024-12-09 23:00:30.608708] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:55.498 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.498 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:18:55.498 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:18:55.498 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:18:55.498 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:18:55.498 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.498 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.498 [2024-12-09 23:00:30.636494] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:18:55.498 [2024-12-09 23:00:30.636568] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:18:55.498 [2024-12-09 23:00:30.636590] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:18:55.498 [2024-12-09 23:00:30.636734] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:55.498 [2024-12-09 23:00:30.636907] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:55.498 [2024-12-09 23:00:30.636976] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:55.498 [2024-12-09 23:00:30.636989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:18:55.498 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.498 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:18:55.498 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.498 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.498 [2024-12-09 23:00:30.648446] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:18:55.498 [2024-12-09 23:00:30.648498] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.498 [2024-12-09 23:00:30.648516] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:18:55.498 [2024-12-09 23:00:30.648529] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.498 [2024-12-09 23:00:30.650680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.498 [2024-12-09 23:00:30.650716] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:18:55.498 [2024-12-09 23:00:30.652311] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 02f2000b-025f-40e5-ad4a-e9c33b77c4a4 00:18:55.498 [2024-12-09 23:00:30.652370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 02f2000b-025f-40e5-ad4a-e9c33b77c4a4 is claimed 00:18:55.498 [2024-12-09 23:00:30.652467] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev c7b66ca9-edfd-4d8d-97d2-52b397c6a8f5 00:18:55.498 [2024-12-09 23:00:30.652485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c7b66ca9-edfd-4d8d-97d2-52b397c6a8f5 is claimed 00:18:55.498 [2024-12-09 23:00:30.652596] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev c7b66ca9-edfd-4d8d-97d2-52b397c6a8f5 (2) smaller than existing raid bdev Raid (3) 00:18:55.498 [2024-12-09 23:00:30.652616] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 02f2000b-025f-40e5-ad4a-e9c33b77c4a4: File exists 00:18:55.498 [2024-12-09 23:00:30.652657] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:55.498 [2024-12-09 23:00:30.652668] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:55.498 [2024-12-09 23:00:30.652901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:55.498 [2024-12-09 23:00:30.653062] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:55.498 [2024-12-09 23:00:30.653071] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:18:55.498 pt0 00:18:55.498 [2024-12-09 23:00:30.653261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:55.498 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.499 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:18:55.499 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.499 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.499 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.499 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:18:55.499 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:18:55.499 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:18:55.499 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:18:55.499 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.499 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.499 [2024-12-09 23:00:30.668911] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:55.499 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.499 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:18:55.499 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:18:55.499 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:18:55.499 23:00:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 58909 00:18:55.499 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 58909 ']' 00:18:55.499 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 58909 00:18:55.499 23:00:30 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:18:55.499 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:55.499 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58909 00:18:55.499 killing process with pid 58909 00:18:55.499 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:55.499 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:55.499 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58909' 00:18:55.499 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 58909 00:18:55.499 [2024-12-09 23:00:30.717043] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:55.499 [2024-12-09 23:00:30.717141] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:55.499 23:00:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 58909 00:18:55.499 [2024-12-09 23:00:30.717193] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:55.499 [2024-12-09 23:00:30.717213] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:18:56.431 [2024-12-09 23:00:31.611356] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:57.002 ************************************ 00:18:57.002 END TEST raid1_resize_superblock_test 00:18:57.002 ************************************ 00:18:57.002 23:00:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:18:57.002 00:18:57.002 real 0m3.326s 00:18:57.002 user 0m3.505s 00:18:57.002 sys 0m0.435s 00:18:57.002 23:00:32 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:18:57.002 23:00:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.261 23:00:32 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:18:57.261 23:00:32 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:18:57.261 23:00:32 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:18:57.261 23:00:32 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:18:57.261 23:00:32 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:18:57.261 23:00:32 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:18:57.261 23:00:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:57.261 23:00:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:57.261 23:00:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:57.261 ************************************ 00:18:57.261 START TEST raid_function_test_raid0 00:18:57.261 ************************************ 00:18:57.261 23:00:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:18:57.261 23:00:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:18:57.261 Process raid pid: 59001 00:18:57.261 23:00:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:18:57.261 23:00:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:18:57.261 23:00:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=59001 00:18:57.261 23:00:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 59001' 00:18:57.261 23:00:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 59001 00:18:57.261 23:00:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 59001 ']' 00:18:57.261 23:00:32 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.261 23:00:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:57.261 23:00:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.261 23:00:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:57.261 23:00:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:57.261 23:00:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:18:57.261 [2024-12-09 23:00:32.452068] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:18:57.261 [2024-12-09 23:00:32.452520] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.261 [2024-12-09 23:00:32.615455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.520 [2024-12-09 23:00:32.727614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.520 [2024-12-09 23:00:32.874316] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:57.520 [2024-12-09 23:00:32.874357] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:58.162 23:00:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:58.162 23:00:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:18:58.162 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:18:58.162 23:00:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.162 23:00:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:18:58.162 Base_1 00:18:58.162 23:00:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.162 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:18:58.162 23:00:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.162 23:00:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:18:58.162 Base_2 00:18:58.162 23:00:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.162 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:18:58.162 23:00:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.162 23:00:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:18:58.162 [2024-12-09 23:00:33.383254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:18:58.162 [2024-12-09 23:00:33.385086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:18:58.162 [2024-12-09 23:00:33.385160] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:58.162 [2024-12-09 23:00:33.385173] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:18:58.162 [2024-12-09 23:00:33.385427] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:58.162 [2024-12-09 23:00:33.385552] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:58.162 [2024-12-09 23:00:33.385561] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:18:58.162 [2024-12-09 23:00:33.385692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:58.162 23:00:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.162 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:58.162 23:00:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.162 23:00:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:18:58.162 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:18:58.162 23:00:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.162 23:00:33 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:18:58.162 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:18:58.162 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:18:58.162 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:58.162 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:18:58.162 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:58.162 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:58.162 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:58.162 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:18:58.162 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:58.162 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:58.162 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:18:58.423 [2024-12-09 23:00:33.599336] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:58.423 /dev/nbd0 00:18:58.423 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:58.423 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:58.423 23:00:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:58.423 23:00:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:18:58.423 23:00:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:58.423 
23:00:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:58.423 23:00:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:58.423 23:00:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:18:58.423 23:00:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:58.423 23:00:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:58.423 23:00:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:58.423 1+0 records in 00:18:58.423 1+0 records out 00:18:58.423 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315386 s, 13.0 MB/s 00:18:58.423 23:00:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:58.423 23:00:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:18:58.423 23:00:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:58.423 23:00:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:58.423 23:00:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:18:58.423 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:58.423 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:58.423 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:18:58.423 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:18:58.423 23:00:33 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:18:58.682 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:58.682 { 00:18:58.682 "nbd_device": "/dev/nbd0", 00:18:58.682 "bdev_name": "raid" 00:18:58.682 } 00:18:58.682 ]' 00:18:58.682 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:58.682 { 00:18:58.682 "nbd_device": "/dev/nbd0", 00:18:58.682 "bdev_name": "raid" 00:18:58.682 } 00:18:58.682 ]' 00:18:58.682 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:58.682 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:18:58.682 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:18:58.682 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:58.682 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:18:58.682 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:18:58.682 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:18:58.682 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:18:58.682 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:18:58.682 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:18:58.682 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:18:58.682 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:18:58.682 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:18:58.682 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC 
/dev/nbd0 00:18:58.682 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:18:58.682 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:18:58.682 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:18:58.682 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:18:58.682 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:18:58.682 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:18:58.682 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:18:58.682 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:18:58.682 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:18:58.682 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:18:58.682 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:18:58.682 4096+0 records in 00:18:58.682 4096+0 records out 00:18:58.682 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0215725 s, 97.2 MB/s 00:18:58.682 23:00:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:18:58.942 4096+0 records in 00:18:58.942 4096+0 records out 00:18:58.942 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.25414 s, 8.3 MB/s 00:18:58.942 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:18:58.942 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:18:58.942 23:00:34 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:18:58.942 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:18:58.942 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:18:58.942 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:18:58.942 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:18:58.942 128+0 records in 00:18:58.942 128+0 records out 00:18:58.942 65536 bytes (66 kB, 64 KiB) copied, 0.000925002 s, 70.8 MB/s 00:18:58.942 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:18:58.942 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:18:58.942 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:18:58.942 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:18:58.942 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:18:58.943 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:18:58.943 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:18:58.943 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:18:58.943 2035+0 records in 00:18:58.943 2035+0 records out 00:18:58.943 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00633755 s, 164 MB/s 00:18:58.943 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:18:58.943 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:18:58.943 23:00:34 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:18:58.943 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:18:58.943 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:18:58.943 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:18:58.943 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:18:58.943 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:18:58.943 456+0 records in 00:18:58.943 456+0 records out 00:18:58.943 233472 bytes (233 kB, 228 KiB) copied, 0.00233089 s, 100 MB/s 00:18:58.943 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:18:58.943 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:18:58.943 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:18:58.943 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:18:58.943 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:18:58.943 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:18:58.943 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:58.943 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:58.943 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:58.943 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:58.943 23:00:34 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:18:58.943 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:58.943 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:59.205 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:59.205 [2024-12-09 23:00:34.489907] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:59.205 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:59.205 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:59.205 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:59.205 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:59.205 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:59.205 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:18:59.205 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:18:59.205 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:18:59.205 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:18:59.205 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:18:59.467 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:59.467 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:59.467 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:18:59.467 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:59.467 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:18:59.467 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:59.467 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:18:59.467 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:18:59.467 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:18:59.467 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:18:59.467 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:18:59.467 23:00:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 59001 00:18:59.467 23:00:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 59001 ']' 00:18:59.467 23:00:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 59001 00:18:59.467 23:00:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:18:59.467 23:00:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:59.467 23:00:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59001 00:18:59.467 killing process with pid 59001 00:18:59.467 23:00:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:59.467 23:00:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:59.467 23:00:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59001' 00:18:59.467 23:00:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 59001 
00:18:59.467 [2024-12-09 23:00:34.737468] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:59.467 23:00:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 59001 00:18:59.467 [2024-12-09 23:00:34.737556] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:59.467 [2024-12-09 23:00:34.737601] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:59.467 [2024-12-09 23:00:34.737616] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:18:59.728 [2024-12-09 23:00:34.865562] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:00.356 ************************************ 00:19:00.356 END TEST raid_function_test_raid0 00:19:00.356 ************************************ 00:19:00.356 23:00:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:19:00.356 00:19:00.356 real 0m3.248s 00:19:00.356 user 0m3.905s 00:19:00.356 sys 0m0.700s 00:19:00.356 23:00:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:00.356 23:00:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:19:00.356 23:00:35 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:19:00.356 23:00:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:00.356 23:00:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:00.356 23:00:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:00.356 ************************************ 00:19:00.356 START TEST raid_function_test_concat 00:19:00.356 ************************************ 00:19:00.356 23:00:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:19:00.356 23:00:35 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:19:00.356 23:00:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:19:00.356 23:00:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:19:00.356 23:00:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=59119 00:19:00.356 23:00:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 59119' 00:19:00.356 Process raid pid: 59119 00:19:00.356 23:00:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 59119 00:19:00.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.356 23:00:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 59119 ']' 00:19:00.356 23:00:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.356 23:00:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:00.356 23:00:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:00.356 23:00:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.356 23:00:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:00.356 23:00:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:19:00.617 [2024-12-09 23:00:35.775942] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:19:00.617 [2024-12-09 23:00:35.776124] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.617 [2024-12-09 23:00:35.953616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.905 [2024-12-09 23:00:36.065557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.905 [2024-12-09 23:00:36.203470] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:00.905 [2024-12-09 23:00:36.203618] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:01.476 23:00:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.476 23:00:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:19:01.476 23:00:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:19:01.476 23:00:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.476 23:00:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:19:01.476 Base_1 00:19:01.476 23:00:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.476 23:00:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:19:01.476 23:00:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.476 23:00:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:19:01.476 Base_2 00:19:01.476 23:00:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.476 23:00:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:19:01.477 23:00:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.477 23:00:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:19:01.477 [2024-12-09 23:00:36.691345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:19:01.477 [2024-12-09 23:00:36.693223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:19:01.477 [2024-12-09 23:00:36.693417] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:01.477 [2024-12-09 23:00:36.693435] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:19:01.477 [2024-12-09 23:00:36.693712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:01.477 [2024-12-09 23:00:36.693845] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:01.477 [2024-12-09 23:00:36.693854] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:19:01.477 [2024-12-09 23:00:36.694000] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:01.477 23:00:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.477 23:00:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:01.477 23:00:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:19:01.477 23:00:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.477 23:00:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:19:01.477 23:00:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.477 23:00:36 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:19:01.477 23:00:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:19:01.477 23:00:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:19:01.477 23:00:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:01.477 23:00:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:19:01.477 23:00:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:01.477 23:00:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:01.477 23:00:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:01.477 23:00:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:19:01.477 23:00:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:01.477 23:00:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:01.477 23:00:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:19:01.739 [2024-12-09 23:00:36.919417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:01.739 /dev/nbd0 00:19:01.739 23:00:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:01.739 23:00:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:01.739 23:00:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:01.739 23:00:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:19:01.739 23:00:36 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:01.739 23:00:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:01.739 23:00:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:01.739 23:00:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:19:01.739 23:00:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:01.739 23:00:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:01.739 23:00:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:01.739 1+0 records in 00:19:01.739 1+0 records out 00:19:01.739 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259924 s, 15.8 MB/s 00:19:01.739 23:00:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:01.739 23:00:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:19:01.739 23:00:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:01.739 23:00:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:01.739 23:00:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:19:01.739 23:00:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:01.739 23:00:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:01.739 23:00:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:19:01.739 23:00:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk.sock 00:19:01.739 23:00:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:19:02.001 23:00:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:02.001 { 00:19:02.001 "nbd_device": "/dev/nbd0", 00:19:02.001 "bdev_name": "raid" 00:19:02.001 } 00:19:02.001 ]' 00:19:02.001 23:00:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:02.001 { 00:19:02.001 "nbd_device": "/dev/nbd0", 00:19:02.001 "bdev_name": "raid" 00:19:02.001 } 00:19:02.001 ]' 00:19:02.001 23:00:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:02.001 23:00:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:02.001 23:00:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:02.001 23:00:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:02.001 23:00:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:19:02.001 23:00:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:19:02.001 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:19:02.001 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:19:02.001 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:19:02.001 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:19:02.001 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:19:02.001 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:19:02.001 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 
00:19:02.002 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:19:02.002 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:19:02.002 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:19:02.002 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:19:02.002 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:19:02.002 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:19:02.002 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:19:02.002 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:19:02.002 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:19:02.002 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:19:02.002 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:19:02.002 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:19:02.002 4096+0 records in 00:19:02.002 4096+0 records out 00:19:02.002 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0236028 s, 88.9 MB/s 00:19:02.002 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:19:02.263 4096+0 records in 00:19:02.263 4096+0 records out 00:19:02.263 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.223891 s, 9.4 MB/s 00:19:02.263 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:19:02.263 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:19:02.263 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:19:02.263 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:19:02.263 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:19:02.263 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:19:02.263 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:19:02.263 128+0 records in 00:19:02.263 128+0 records out 00:19:02.263 65536 bytes (66 kB, 64 KiB) copied, 0.000818295 s, 80.1 MB/s 00:19:02.263 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:19:02.263 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:19:02.263 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:19:02.263 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:19:02.263 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:19:02.263 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:19:02.263 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:19:02.263 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:19:02.263 2035+0 records in 00:19:02.263 2035+0 records out 00:19:02.263 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00815849 s, 128 MB/s 00:19:02.263 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:19:02.263 23:00:37 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:19:02.263 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:19:02.263 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:19:02.263 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:19:02.263 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:19:02.263 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:19:02.263 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:19:02.263 456+0 records in 00:19:02.263 456+0 records out 00:19:02.263 233472 bytes (233 kB, 228 KiB) copied, 0.000905788 s, 258 MB/s 00:19:02.263 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:19:02.263 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:19:02.263 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:19:02.263 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:19:02.263 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:19:02.263 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:19:02.263 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:02.263 23:00:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:02.263 23:00:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:02.263 
23:00:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:02.263 23:00:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:19:02.263 23:00:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:02.263 23:00:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:02.532 23:00:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:02.532 [2024-12-09 23:00:37.796468] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:02.532 23:00:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:02.532 23:00:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:02.532 23:00:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:02.532 23:00:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:02.532 23:00:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:02.532 23:00:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:19:02.532 23:00:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:19:02.533 23:00:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:19:02.533 23:00:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:19:02.533 23:00:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:19:02.796 23:00:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:02.796 23:00:38 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:02.796 23:00:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:02.796 23:00:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:02.796 23:00:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:19:02.796 23:00:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:02.796 23:00:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:19:02.796 23:00:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:19:02.796 23:00:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:19:02.796 23:00:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:19:02.796 23:00:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:19:02.796 23:00:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 59119 00:19:02.796 23:00:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 59119 ']' 00:19:02.796 23:00:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 59119 00:19:02.796 23:00:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:19:02.796 23:00:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:02.796 23:00:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59119 00:19:02.796 23:00:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:02.796 23:00:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:02.796 killing process with pid 59119 00:19:02.796 23:00:38 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59119' 00:19:02.796 23:00:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 59119 00:19:02.796 [2024-12-09 23:00:38.111902] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:02.796 23:00:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 59119 00:19:02.796 [2024-12-09 23:00:38.111991] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:02.796 [2024-12-09 23:00:38.112042] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:02.796 [2024-12-09 23:00:38.112053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:19:03.078 [2024-12-09 23:00:38.243254] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:03.650 23:00:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:19:03.650 00:19:03.650 real 0m3.269s 00:19:03.650 user 0m3.954s 00:19:03.650 sys 0m0.755s 00:19:03.650 23:00:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:03.650 ************************************ 00:19:03.650 END TEST raid_function_test_concat 00:19:03.650 ************************************ 00:19:03.650 23:00:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:19:03.650 23:00:38 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:19:03.650 23:00:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:03.650 23:00:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:03.650 23:00:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:03.650 ************************************ 00:19:03.650 START TEST raid0_resize_test 00:19:03.650 ************************************ 00:19:03.650 23:00:38 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:19:03.650 23:00:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:19:03.650 23:00:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:19:03.650 23:00:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:19:03.650 23:00:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:19:03.651 23:00:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:19:03.651 23:00:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:19:03.651 23:00:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:19:03.651 23:00:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:19:03.651 23:00:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=59240 00:19:03.651 Process raid pid: 59240 00:19:03.651 23:00:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 59240' 00:19:03.651 23:00:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 59240 00:19:03.651 23:00:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 59240 ']' 00:19:03.651 23:00:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.651 23:00:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:03.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.651 23:00:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:03.651 23:00:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:03.651 23:00:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.651 23:00:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:03.912 [2024-12-09 23:00:39.061863] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:19:03.913 [2024-12-09 23:00:39.061982] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:03.913 [2024-12-09 23:00:39.216286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.174 [2024-12-09 23:00:39.318475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.174 [2024-12-09 23:00:39.456541] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:04.174 [2024-12-09 23:00:39.456594] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:04.746 23:00:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:04.746 23:00:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:19:04.746 23:00:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:19:04.746 23:00:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.746 23:00:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.746 Base_1 00:19:04.746 23:00:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.746 23:00:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:19:04.746 23:00:39 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.746 23:00:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.746 Base_2 00:19:04.746 23:00:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.746 23:00:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:19:04.746 23:00:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:19:04.746 23:00:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.746 23:00:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.746 [2024-12-09 23:00:39.959826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:19:04.746 [2024-12-09 23:00:39.961630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:19:04.746 [2024-12-09 23:00:39.961680] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:04.746 [2024-12-09 23:00:39.961692] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:19:04.746 [2024-12-09 23:00:39.961940] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:19:04.746 [2024-12-09 23:00:39.962042] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:04.746 [2024-12-09 23:00:39.962050] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:19:04.746 [2024-12-09 23:00:39.962195] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:04.746 23:00:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.746 23:00:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:19:04.746 23:00:39 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.746 23:00:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.746 [2024-12-09 23:00:39.967802] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:19:04.746 [2024-12-09 23:00:39.967824] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:19:04.746 true 00:19:04.746 23:00:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.746 23:00:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:19:04.746 23:00:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.746 23:00:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:19:04.746 23:00:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.746 [2024-12-09 23:00:39.979987] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:04.746 23:00:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.746 23:00:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:19:04.746 23:00:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:19:04.746 23:00:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:19:04.746 23:00:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:19:04.746 23:00:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:19:04.746 23:00:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:19:04.746 23:00:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.746 23:00:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:19:04.746 [2024-12-09 23:00:40.011804] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:19:04.746 [2024-12-09 23:00:40.011823] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:19:04.746 [2024-12-09 23:00:40.011851] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:19:04.746 true 00:19:04.746 23:00:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.746 23:00:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:19:04.746 23:00:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.746 23:00:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.746 23:00:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:19:04.746 [2024-12-09 23:00:40.024005] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:04.746 23:00:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.746 23:00:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:19:04.746 23:00:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:19:04.746 23:00:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:19:04.746 23:00:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:19:04.746 23:00:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:19:04.746 23:00:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 59240 00:19:04.746 23:00:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 59240 ']' 00:19:04.746 23:00:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 59240 
00:19:04.746 23:00:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:19:04.746 23:00:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:04.747 23:00:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59240 00:19:04.747 killing process with pid 59240 00:19:04.747 23:00:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:04.747 23:00:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:04.747 23:00:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59240' 00:19:04.747 23:00:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 59240 00:19:04.747 [2024-12-09 23:00:40.071636] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:04.747 23:00:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 59240 00:19:04.747 [2024-12-09 23:00:40.071707] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:04.747 [2024-12-09 23:00:40.071755] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:04.747 [2024-12-09 23:00:40.071764] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:19:04.747 [2024-12-09 23:00:40.082928] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:05.687 23:00:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:19:05.687 00:19:05.687 real 0m1.797s 00:19:05.687 user 0m1.968s 00:19:05.687 sys 0m0.252s 00:19:05.687 23:00:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.687 ************************************ 00:19:05.687 END TEST raid0_resize_test 00:19:05.687 ************************************ 00:19:05.687 23:00:40 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.687 23:00:40 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:19:05.687 23:00:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:05.687 23:00:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.687 23:00:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:05.687 ************************************ 00:19:05.687 START TEST raid1_resize_test 00:19:05.687 ************************************ 00:19:05.687 23:00:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:19:05.687 23:00:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:19:05.687 23:00:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:19:05.687 23:00:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:19:05.687 23:00:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:19:05.687 23:00:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:19:05.687 23:00:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:19:05.687 Process raid pid: 59291 00:19:05.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:05.688 23:00:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:19:05.688 23:00:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:19:05.688 23:00:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=59291 00:19:05.688 23:00:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 59291' 00:19:05.688 23:00:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 59291 00:19:05.688 23:00:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 59291 ']' 00:19:05.688 23:00:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:05.688 23:00:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.688 23:00:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:05.688 23:00:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.688 23:00:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:05.688 23:00:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.688 [2024-12-09 23:00:40.903693] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:19:05.688 [2024-12-09 23:00:40.903988] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:05.960 [2024-12-09 23:00:41.063190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.960 [2024-12-09 23:00:41.166606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.960 [2024-12-09 23:00:41.306530] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:05.960 [2024-12-09 23:00:41.306677] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:06.540 23:00:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.540 23:00:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.541 Base_1 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.541 Base_2 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.541 [2024-12-09 23:00:41.767152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:19:06.541 [2024-12-09 23:00:41.768996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:19:06.541 [2024-12-09 23:00:41.769076] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:06.541 [2024-12-09 23:00:41.769089] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:06.541 [2024-12-09 23:00:41.769369] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:19:06.541 [2024-12-09 23:00:41.769480] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:06.541 [2024-12-09 23:00:41.769489] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:19:06.541 [2024-12-09 23:00:41.769616] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.541 [2024-12-09 23:00:41.775142] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:19:06.541 [2024-12-09 23:00:41.775167] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:19:06.541 true 00:19:06.541 
23:00:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.541 [2024-12-09 23:00:41.787327] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.541 [2024-12-09 23:00:41.819163] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:19:06.541 [2024-12-09 23:00:41.819265] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:19:06.541 [2024-12-09 23:00:41.819350] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:19:06.541 true 00:19:06.541 23:00:41 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.541 [2024-12-09 23:00:41.831358] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 59291 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 59291 ']' 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 59291 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59291 00:19:06.541 killing process with pid 59291 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:06.541 23:00:41 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59291' 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 59291 00:19:06.541 [2024-12-09 23:00:41.880437] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:06.541 23:00:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 59291 00:19:06.541 [2024-12-09 23:00:41.880514] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:06.541 [2024-12-09 23:00:41.880965] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:06.541 [2024-12-09 23:00:41.880983] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:19:06.541 [2024-12-09 23:00:41.891660] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:07.481 ************************************ 00:19:07.481 END TEST raid1_resize_test 00:19:07.481 ************************************ 00:19:07.481 23:00:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:19:07.481 00:19:07.481 real 0m1.780s 00:19:07.481 user 0m1.915s 00:19:07.481 sys 0m0.258s 00:19:07.481 23:00:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:07.481 23:00:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.482 23:00:42 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:19:07.482 23:00:42 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:19:07.482 23:00:42 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:19:07.482 23:00:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:07.482 23:00:42 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:07.482 23:00:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:07.482 ************************************ 00:19:07.482 START TEST raid_state_function_test 00:19:07.482 ************************************ 00:19:07.482 23:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:19:07.482 23:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:19:07.482 23:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:07.482 23:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:19:07.482 23:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:07.482 23:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:07.482 23:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:07.482 23:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:07.482 23:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:07.482 23:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:07.482 23:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:07.482 23:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:07.482 23:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:07.482 23:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:07.482 23:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:07.482 23:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:19:07.482 23:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:07.482 23:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:07.482 23:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:07.482 23:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:19:07.482 23:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:19:07.482 23:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:07.482 23:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:19:07.482 23:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:19:07.482 Process raid pid: 59348 00:19:07.482 23:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=59348 00:19:07.482 23:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 59348' 00:19:07.482 23:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:07.482 23:00:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 59348 00:19:07.482 23:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 59348 ']' 00:19:07.482 23:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.482 23:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:07.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:07.482 23:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.482 23:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:07.482 23:00:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.482 [2024-12-09 23:00:42.749901] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:19:07.482 [2024-12-09 23:00:42.750205] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.741 [2024-12-09 23:00:42.908795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.741 [2024-12-09 23:00:43.012192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.002 [2024-12-09 23:00:43.151056] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:08.002 [2024-12-09 23:00:43.151092] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:08.264 23:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.264 23:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:19:08.264 23:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:08.264 23:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.264 23:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.264 [2024-12-09 23:00:43.616831] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:08.264 
[2024-12-09 23:00:43.616890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:08.264 [2024-12-09 23:00:43.616901] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:08.264 [2024-12-09 23:00:43.616910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:08.264 23:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.264 23:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:19:08.264 23:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:08.264 23:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:08.264 23:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:08.264 23:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:08.264 23:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:08.264 23:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.264 23:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.264 23:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.264 23:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.528 23:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.528 23:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:08.528 23:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:08.528 23:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.528 23:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.528 23:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.528 "name": "Existed_Raid", 00:19:08.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.528 "strip_size_kb": 64, 00:19:08.528 "state": "configuring", 00:19:08.528 "raid_level": "raid0", 00:19:08.528 "superblock": false, 00:19:08.528 "num_base_bdevs": 2, 00:19:08.528 "num_base_bdevs_discovered": 0, 00:19:08.528 "num_base_bdevs_operational": 2, 00:19:08.528 "base_bdevs_list": [ 00:19:08.528 { 00:19:08.528 "name": "BaseBdev1", 00:19:08.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.528 "is_configured": false, 00:19:08.528 "data_offset": 0, 00:19:08.528 "data_size": 0 00:19:08.528 }, 00:19:08.528 { 00:19:08.528 "name": "BaseBdev2", 00:19:08.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.528 "is_configured": false, 00:19:08.528 "data_offset": 0, 00:19:08.528 "data_size": 0 00:19:08.528 } 00:19:08.528 ] 00:19:08.528 }' 00:19:08.528 23:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.528 23:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.804 23:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:08.804 23:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.804 23:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.804 [2024-12-09 23:00:43.940853] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:08.804 [2024-12-09 23:00:43.940884] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, 
state configuring 00:19:08.804 23:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.804 23:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:08.804 23:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.804 23:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.805 [2024-12-09 23:00:43.948840] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:08.805 [2024-12-09 23:00:43.948875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:08.805 [2024-12-09 23:00:43.948883] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:08.805 [2024-12-09 23:00:43.948894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:08.805 23:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.805 23:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:08.805 23:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.805 23:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.805 [2024-12-09 23:00:43.981278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:08.805 BaseBdev1 00:19:08.805 23:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.805 23:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:08.805 23:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:08.805 23:00:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:08.805 23:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:08.805 23:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:08.805 23:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:08.805 23:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:08.805 23:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.805 23:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.805 23:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.805 23:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:08.805 23:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.805 23:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.805 [ 00:19:08.805 { 00:19:08.805 "name": "BaseBdev1", 00:19:08.805 "aliases": [ 00:19:08.805 "d3c14de0-8634-4568-80c7-d9d47e9d18e2" 00:19:08.805 ], 00:19:08.805 "product_name": "Malloc disk", 00:19:08.805 "block_size": 512, 00:19:08.805 "num_blocks": 65536, 00:19:08.805 "uuid": "d3c14de0-8634-4568-80c7-d9d47e9d18e2", 00:19:08.805 "assigned_rate_limits": { 00:19:08.805 "rw_ios_per_sec": 0, 00:19:08.805 "rw_mbytes_per_sec": 0, 00:19:08.805 "r_mbytes_per_sec": 0, 00:19:08.805 "w_mbytes_per_sec": 0 00:19:08.805 }, 00:19:08.805 "claimed": true, 00:19:08.805 "claim_type": "exclusive_write", 00:19:08.805 "zoned": false, 00:19:08.805 "supported_io_types": { 00:19:08.805 "read": true, 00:19:08.805 "write": true, 00:19:08.805 "unmap": true, 00:19:08.805 "flush": true, 
00:19:08.805 "reset": true, 00:19:08.805 "nvme_admin": false, 00:19:08.805 "nvme_io": false, 00:19:08.805 "nvme_io_md": false, 00:19:08.805 "write_zeroes": true, 00:19:08.805 "zcopy": true, 00:19:08.805 "get_zone_info": false, 00:19:08.805 "zone_management": false, 00:19:08.805 "zone_append": false, 00:19:08.805 "compare": false, 00:19:08.805 "compare_and_write": false, 00:19:08.805 "abort": true, 00:19:08.805 "seek_hole": false, 00:19:08.805 "seek_data": false, 00:19:08.805 "copy": true, 00:19:08.805 "nvme_iov_md": false 00:19:08.805 }, 00:19:08.805 "memory_domains": [ 00:19:08.805 { 00:19:08.805 "dma_device_id": "system", 00:19:08.805 "dma_device_type": 1 00:19:08.805 }, 00:19:08.805 { 00:19:08.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:08.805 "dma_device_type": 2 00:19:08.805 } 00:19:08.805 ], 00:19:08.805 "driver_specific": {} 00:19:08.805 } 00:19:08.805 ] 00:19:08.805 23:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.805 23:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:08.805 23:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:19:08.805 23:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:08.805 23:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:08.805 23:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:08.805 23:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:08.805 23:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:08.805 23:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.805 23:00:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.805 23:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.805 23:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.805 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.805 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:08.805 23:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.805 23:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.805 23:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.805 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.805 "name": "Existed_Raid", 00:19:08.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.805 "strip_size_kb": 64, 00:19:08.805 "state": "configuring", 00:19:08.805 "raid_level": "raid0", 00:19:08.805 "superblock": false, 00:19:08.805 "num_base_bdevs": 2, 00:19:08.805 "num_base_bdevs_discovered": 1, 00:19:08.805 "num_base_bdevs_operational": 2, 00:19:08.805 "base_bdevs_list": [ 00:19:08.805 { 00:19:08.805 "name": "BaseBdev1", 00:19:08.805 "uuid": "d3c14de0-8634-4568-80c7-d9d47e9d18e2", 00:19:08.805 "is_configured": true, 00:19:08.805 "data_offset": 0, 00:19:08.805 "data_size": 65536 00:19:08.805 }, 00:19:08.805 { 00:19:08.805 "name": "BaseBdev2", 00:19:08.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.805 "is_configured": false, 00:19:08.805 "data_offset": 0, 00:19:08.805 "data_size": 0 00:19:08.805 } 00:19:08.805 ] 00:19:08.805 }' 00:19:08.805 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.805 23:00:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:09.100 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:09.100 23:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.100 23:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.100 [2024-12-09 23:00:44.337390] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:09.100 [2024-12-09 23:00:44.337537] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:09.100 23:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.100 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:09.100 23:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.100 23:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.100 [2024-12-09 23:00:44.345447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:09.100 [2024-12-09 23:00:44.347371] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:09.100 [2024-12-09 23:00:44.347489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:09.100 23:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.100 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:09.100 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:09.100 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 
00:19:09.100 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:09.100 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:09.100 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:09.100 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:09.100 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:09.100 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.100 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.100 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.100 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.100 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:09.100 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.100 23:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.100 23:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.100 23:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.100 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.100 "name": "Existed_Raid", 00:19:09.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.100 "strip_size_kb": 64, 00:19:09.100 "state": "configuring", 00:19:09.100 "raid_level": "raid0", 00:19:09.100 "superblock": false, 00:19:09.100 "num_base_bdevs": 2, 00:19:09.100 
"num_base_bdevs_discovered": 1, 00:19:09.100 "num_base_bdevs_operational": 2, 00:19:09.100 "base_bdevs_list": [ 00:19:09.100 { 00:19:09.100 "name": "BaseBdev1", 00:19:09.100 "uuid": "d3c14de0-8634-4568-80c7-d9d47e9d18e2", 00:19:09.100 "is_configured": true, 00:19:09.100 "data_offset": 0, 00:19:09.100 "data_size": 65536 00:19:09.100 }, 00:19:09.100 { 00:19:09.100 "name": "BaseBdev2", 00:19:09.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.100 "is_configured": false, 00:19:09.100 "data_offset": 0, 00:19:09.100 "data_size": 0 00:19:09.100 } 00:19:09.100 ] 00:19:09.100 }' 00:19:09.100 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.100 23:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.361 [2024-12-09 23:00:44.676202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:09.361 [2024-12-09 23:00:44.676243] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:09.361 [2024-12-09 23:00:44.676251] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:19:09.361 [2024-12-09 23:00:44.676501] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:09.361 [2024-12-09 23:00:44.676644] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:09.361 [2024-12-09 23:00:44.676655] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:09.361 [2024-12-09 23:00:44.676880] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:09.361 BaseBdev2 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.361 [ 00:19:09.361 { 00:19:09.361 "name": "BaseBdev2", 00:19:09.361 "aliases": [ 00:19:09.361 "fa04f9e3-b062-4ed3-8a64-4aa3fb87876a" 00:19:09.361 ], 00:19:09.361 "product_name": "Malloc disk", 00:19:09.361 "block_size": 512, 00:19:09.361 "num_blocks": 65536, 00:19:09.361 "uuid": "fa04f9e3-b062-4ed3-8a64-4aa3fb87876a", 00:19:09.361 
"assigned_rate_limits": { 00:19:09.361 "rw_ios_per_sec": 0, 00:19:09.361 "rw_mbytes_per_sec": 0, 00:19:09.361 "r_mbytes_per_sec": 0, 00:19:09.361 "w_mbytes_per_sec": 0 00:19:09.361 }, 00:19:09.361 "claimed": true, 00:19:09.361 "claim_type": "exclusive_write", 00:19:09.361 "zoned": false, 00:19:09.361 "supported_io_types": { 00:19:09.361 "read": true, 00:19:09.361 "write": true, 00:19:09.361 "unmap": true, 00:19:09.361 "flush": true, 00:19:09.361 "reset": true, 00:19:09.361 "nvme_admin": false, 00:19:09.361 "nvme_io": false, 00:19:09.361 "nvme_io_md": false, 00:19:09.361 "write_zeroes": true, 00:19:09.361 "zcopy": true, 00:19:09.361 "get_zone_info": false, 00:19:09.361 "zone_management": false, 00:19:09.361 "zone_append": false, 00:19:09.361 "compare": false, 00:19:09.361 "compare_and_write": false, 00:19:09.361 "abort": true, 00:19:09.361 "seek_hole": false, 00:19:09.361 "seek_data": false, 00:19:09.361 "copy": true, 00:19:09.361 "nvme_iov_md": false 00:19:09.361 }, 00:19:09.361 "memory_domains": [ 00:19:09.361 { 00:19:09.361 "dma_device_id": "system", 00:19:09.361 "dma_device_type": 1 00:19:09.361 }, 00:19:09.361 { 00:19:09.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.361 "dma_device_type": 2 00:19:09.361 } 00:19:09.361 ], 00:19:09.361 "driver_specific": {} 00:19:09.361 } 00:19:09.361 ] 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.361 23:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.623 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.623 "name": "Existed_Raid", 00:19:09.623 "uuid": "eea0b494-c9d9-49fb-855b-830d92df5247", 00:19:09.623 "strip_size_kb": 64, 00:19:09.623 "state": "online", 00:19:09.623 "raid_level": "raid0", 00:19:09.623 "superblock": false, 00:19:09.623 "num_base_bdevs": 2, 00:19:09.623 "num_base_bdevs_discovered": 2, 00:19:09.623 "num_base_bdevs_operational": 2, 00:19:09.623 "base_bdevs_list": [ 00:19:09.623 { 
00:19:09.623 "name": "BaseBdev1", 00:19:09.623 "uuid": "d3c14de0-8634-4568-80c7-d9d47e9d18e2", 00:19:09.623 "is_configured": true, 00:19:09.623 "data_offset": 0, 00:19:09.623 "data_size": 65536 00:19:09.623 }, 00:19:09.623 { 00:19:09.623 "name": "BaseBdev2", 00:19:09.623 "uuid": "fa04f9e3-b062-4ed3-8a64-4aa3fb87876a", 00:19:09.623 "is_configured": true, 00:19:09.623 "data_offset": 0, 00:19:09.623 "data_size": 65536 00:19:09.623 } 00:19:09.623 ] 00:19:09.623 }' 00:19:09.623 23:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.623 23:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.882 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:09.882 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:09.882 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:09.882 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:09.882 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:09.882 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:09.882 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:09.882 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:09.882 23:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.882 23:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.882 [2024-12-09 23:00:45.012596] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:09.882 23:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:19:09.882 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:09.882 "name": "Existed_Raid", 00:19:09.882 "aliases": [ 00:19:09.882 "eea0b494-c9d9-49fb-855b-830d92df5247" 00:19:09.882 ], 00:19:09.882 "product_name": "Raid Volume", 00:19:09.882 "block_size": 512, 00:19:09.882 "num_blocks": 131072, 00:19:09.882 "uuid": "eea0b494-c9d9-49fb-855b-830d92df5247", 00:19:09.882 "assigned_rate_limits": { 00:19:09.882 "rw_ios_per_sec": 0, 00:19:09.882 "rw_mbytes_per_sec": 0, 00:19:09.882 "r_mbytes_per_sec": 0, 00:19:09.882 "w_mbytes_per_sec": 0 00:19:09.882 }, 00:19:09.882 "claimed": false, 00:19:09.882 "zoned": false, 00:19:09.882 "supported_io_types": { 00:19:09.882 "read": true, 00:19:09.882 "write": true, 00:19:09.882 "unmap": true, 00:19:09.882 "flush": true, 00:19:09.882 "reset": true, 00:19:09.882 "nvme_admin": false, 00:19:09.882 "nvme_io": false, 00:19:09.882 "nvme_io_md": false, 00:19:09.882 "write_zeroes": true, 00:19:09.882 "zcopy": false, 00:19:09.882 "get_zone_info": false, 00:19:09.882 "zone_management": false, 00:19:09.882 "zone_append": false, 00:19:09.882 "compare": false, 00:19:09.882 "compare_and_write": false, 00:19:09.882 "abort": false, 00:19:09.882 "seek_hole": false, 00:19:09.882 "seek_data": false, 00:19:09.882 "copy": false, 00:19:09.882 "nvme_iov_md": false 00:19:09.882 }, 00:19:09.882 "memory_domains": [ 00:19:09.882 { 00:19:09.882 "dma_device_id": "system", 00:19:09.882 "dma_device_type": 1 00:19:09.882 }, 00:19:09.882 { 00:19:09.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.882 "dma_device_type": 2 00:19:09.882 }, 00:19:09.882 { 00:19:09.882 "dma_device_id": "system", 00:19:09.882 "dma_device_type": 1 00:19:09.882 }, 00:19:09.882 { 00:19:09.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.882 "dma_device_type": 2 00:19:09.882 } 00:19:09.882 ], 00:19:09.882 "driver_specific": { 00:19:09.882 "raid": { 00:19:09.882 "uuid": "eea0b494-c9d9-49fb-855b-830d92df5247", 
00:19:09.882 "strip_size_kb": 64, 00:19:09.882 "state": "online", 00:19:09.882 "raid_level": "raid0", 00:19:09.882 "superblock": false, 00:19:09.882 "num_base_bdevs": 2, 00:19:09.882 "num_base_bdevs_discovered": 2, 00:19:09.882 "num_base_bdevs_operational": 2, 00:19:09.882 "base_bdevs_list": [ 00:19:09.882 { 00:19:09.882 "name": "BaseBdev1", 00:19:09.882 "uuid": "d3c14de0-8634-4568-80c7-d9d47e9d18e2", 00:19:09.882 "is_configured": true, 00:19:09.882 "data_offset": 0, 00:19:09.882 "data_size": 65536 00:19:09.882 }, 00:19:09.882 { 00:19:09.882 "name": "BaseBdev2", 00:19:09.882 "uuid": "fa04f9e3-b062-4ed3-8a64-4aa3fb87876a", 00:19:09.882 "is_configured": true, 00:19:09.882 "data_offset": 0, 00:19:09.882 "data_size": 65536 00:19:09.882 } 00:19:09.882 ] 00:19:09.882 } 00:19:09.882 } 00:19:09.882 }' 00:19:09.882 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:09.882 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:09.882 BaseBdev2' 00:19:09.882 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:09.882 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:09.882 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:09.882 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:09.882 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:09.882 23:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.882 23:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:19:09.882 23:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.882 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:09.882 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:09.882 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:09.882 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:09.882 23:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.882 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:09.882 23:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.882 23:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.882 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:09.882 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:09.882 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:09.882 23:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.883 23:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.883 [2024-12-09 23:00:45.176404] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:09.883 [2024-12-09 23:00:45.176437] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:09.883 [2024-12-09 23:00:45.176482] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:09.883 23:00:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.883 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:09.883 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:19:09.883 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:09.883 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:19:09.883 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:19:09.883 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:19:09.883 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:09.883 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:19:09.883 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:09.883 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:09.883 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:09.883 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.883 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.883 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.883 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.883 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.883 23:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.883 
23:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.883 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:10.143 23:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.143 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:10.143 "name": "Existed_Raid", 00:19:10.143 "uuid": "eea0b494-c9d9-49fb-855b-830d92df5247", 00:19:10.143 "strip_size_kb": 64, 00:19:10.143 "state": "offline", 00:19:10.143 "raid_level": "raid0", 00:19:10.143 "superblock": false, 00:19:10.143 "num_base_bdevs": 2, 00:19:10.143 "num_base_bdevs_discovered": 1, 00:19:10.143 "num_base_bdevs_operational": 1, 00:19:10.143 "base_bdevs_list": [ 00:19:10.143 { 00:19:10.143 "name": null, 00:19:10.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.143 "is_configured": false, 00:19:10.143 "data_offset": 0, 00:19:10.143 "data_size": 65536 00:19:10.143 }, 00:19:10.143 { 00:19:10.143 "name": "BaseBdev2", 00:19:10.143 "uuid": "fa04f9e3-b062-4ed3-8a64-4aa3fb87876a", 00:19:10.143 "is_configured": true, 00:19:10.143 "data_offset": 0, 00:19:10.143 "data_size": 65536 00:19:10.143 } 00:19:10.143 ] 00:19:10.143 }' 00:19:10.143 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:10.143 23:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.403 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:10.404 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:10.404 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.404 23:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.404 23:00:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.404 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:10.404 23:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.404 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:10.404 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:10.404 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:10.404 23:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.404 23:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.404 [2024-12-09 23:00:45.563946] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:10.404 [2024-12-09 23:00:45.563992] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:10.404 23:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.404 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:10.404 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:10.404 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.404 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:10.404 23:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.404 23:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.404 23:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:19:10.404 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:10.404 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:10.404 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:10.404 23:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 59348 00:19:10.404 23:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 59348 ']' 00:19:10.404 23:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 59348 00:19:10.404 23:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:19:10.404 23:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:10.404 23:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59348 00:19:10.404 killing process with pid 59348 00:19:10.404 23:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:10.404 23:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:10.404 23:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59348' 00:19:10.404 23:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 59348 00:19:10.404 [2024-12-09 23:00:45.683057] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:10.404 23:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 59348 00:19:10.404 [2024-12-09 23:00:45.693504] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:11.347 23:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:19:11.347 00:19:11.347 real 0m3.712s 00:19:11.347 user 0m5.386s 00:19:11.347 sys 
0m0.545s 00:19:11.347 23:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:11.347 23:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.347 ************************************ 00:19:11.347 END TEST raid_state_function_test 00:19:11.347 ************************************ 00:19:11.347 23:00:46 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:19:11.347 23:00:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:11.347 23:00:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:11.347 23:00:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:11.347 ************************************ 00:19:11.347 START TEST raid_state_function_test_sb 00:19:11.347 ************************************ 00:19:11.347 23:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:19:11.347 23:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:19:11.347 23:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:11.347 23:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:11.347 23:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:11.347 23:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:11.347 23:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:11.347 23:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:11.347 23:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:11.348 23:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
(( i <= num_base_bdevs )) 00:19:11.348 23:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:11.348 23:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:11.348 23:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:11.348 Process raid pid: 59590 00:19:11.348 23:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:11.348 23:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:11.348 23:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:11.348 23:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:11.348 23:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:11.348 23:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:11.348 23:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:19:11.348 23:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:19:11.348 23:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:11.348 23:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:11.348 23:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:11.348 23:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=59590 00:19:11.348 23:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 59590' 00:19:11.348 23:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 59590 00:19:11.348 
23:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 59590 ']' 00:19:11.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.348 23:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.348 23:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:11.348 23:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.348 23:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:11.348 23:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.348 23:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:11.348 [2024-12-09 23:00:46.509485] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:19:11.348 [2024-12-09 23:00:46.509600] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:11.348 [2024-12-09 23:00:46.670865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.609 [2024-12-09 23:00:46.773405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.609 [2024-12-09 23:00:46.910874] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:11.609 [2024-12-09 23:00:46.910913] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:12.182 23:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:12.182 23:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:19:12.182 23:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:12.182 23:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.182 23:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.182 [2024-12-09 23:00:47.439191] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:12.182 [2024-12-09 23:00:47.439245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:12.182 [2024-12-09 23:00:47.439257] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:12.182 [2024-12-09 23:00:47.439267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:12.182 23:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.182 
23:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:19:12.182 23:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:12.182 23:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:12.182 23:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:12.182 23:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:12.182 23:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:12.182 23:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.182 23:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.182 23:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.182 23:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.182 23:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.182 23:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.182 23:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.182 23:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:12.182 23:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.182 23:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.182 "name": "Existed_Raid", 00:19:12.182 "uuid": "b619edda-74f0-4182-acbc-b1f862bda5ee", 00:19:12.182 "strip_size_kb": 
64, 00:19:12.182 "state": "configuring", 00:19:12.182 "raid_level": "raid0", 00:19:12.182 "superblock": true, 00:19:12.182 "num_base_bdevs": 2, 00:19:12.182 "num_base_bdevs_discovered": 0, 00:19:12.182 "num_base_bdevs_operational": 2, 00:19:12.182 "base_bdevs_list": [ 00:19:12.182 { 00:19:12.182 "name": "BaseBdev1", 00:19:12.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.182 "is_configured": false, 00:19:12.182 "data_offset": 0, 00:19:12.182 "data_size": 0 00:19:12.182 }, 00:19:12.182 { 00:19:12.182 "name": "BaseBdev2", 00:19:12.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.182 "is_configured": false, 00:19:12.182 "data_offset": 0, 00:19:12.182 "data_size": 0 00:19:12.182 } 00:19:12.182 ] 00:19:12.182 }' 00:19:12.182 23:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.182 23:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.447 23:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:12.447 23:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.447 23:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.447 [2024-12-09 23:00:47.771201] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:12.447 [2024-12-09 23:00:47.771233] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:12.447 23:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.447 23:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:12.447 23:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.447 23:00:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.447 [2024-12-09 23:00:47.779203] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:12.447 [2024-12-09 23:00:47.779241] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:12.447 [2024-12-09 23:00:47.779251] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:12.447 [2024-12-09 23:00:47.779263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:12.447 23:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.447 23:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:12.447 23:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.447 23:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.707 [2024-12-09 23:00:47.812820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:12.707 BaseBdev1 00:19:12.707 23:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.707 23:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:12.707 23:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:12.707 23:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:12.707 23:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:12.707 23:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:12.707 23:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:19:12.707 23:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:12.707 23:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.707 23:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.707 23:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.707 23:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:12.707 23:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.707 23:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.707 [ 00:19:12.707 { 00:19:12.707 "name": "BaseBdev1", 00:19:12.707 "aliases": [ 00:19:12.707 "27532499-ee33-4fb2-8c23-62b4a1a069e3" 00:19:12.707 ], 00:19:12.707 "product_name": "Malloc disk", 00:19:12.707 "block_size": 512, 00:19:12.707 "num_blocks": 65536, 00:19:12.707 "uuid": "27532499-ee33-4fb2-8c23-62b4a1a069e3", 00:19:12.707 "assigned_rate_limits": { 00:19:12.707 "rw_ios_per_sec": 0, 00:19:12.707 "rw_mbytes_per_sec": 0, 00:19:12.707 "r_mbytes_per_sec": 0, 00:19:12.707 "w_mbytes_per_sec": 0 00:19:12.707 }, 00:19:12.707 "claimed": true, 00:19:12.707 "claim_type": "exclusive_write", 00:19:12.707 "zoned": false, 00:19:12.707 "supported_io_types": { 00:19:12.707 "read": true, 00:19:12.707 "write": true, 00:19:12.707 "unmap": true, 00:19:12.707 "flush": true, 00:19:12.707 "reset": true, 00:19:12.707 "nvme_admin": false, 00:19:12.707 "nvme_io": false, 00:19:12.707 "nvme_io_md": false, 00:19:12.707 "write_zeroes": true, 00:19:12.707 "zcopy": true, 00:19:12.707 "get_zone_info": false, 00:19:12.707 "zone_management": false, 00:19:12.707 "zone_append": false, 00:19:12.707 "compare": false, 00:19:12.707 "compare_and_write": false, 00:19:12.707 
"abort": true, 00:19:12.707 "seek_hole": false, 00:19:12.707 "seek_data": false, 00:19:12.707 "copy": true, 00:19:12.707 "nvme_iov_md": false 00:19:12.707 }, 00:19:12.707 "memory_domains": [ 00:19:12.707 { 00:19:12.707 "dma_device_id": "system", 00:19:12.707 "dma_device_type": 1 00:19:12.707 }, 00:19:12.707 { 00:19:12.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.707 "dma_device_type": 2 00:19:12.707 } 00:19:12.707 ], 00:19:12.707 "driver_specific": {} 00:19:12.707 } 00:19:12.707 ] 00:19:12.707 23:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.707 23:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:12.707 23:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:19:12.707 23:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:12.707 23:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:12.707 23:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:12.707 23:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:12.707 23:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:12.707 23:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.707 23:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.707 23:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.707 23:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.707 23:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:12.707 23:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:12.707 23:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.707 23:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.707 23:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.707 23:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.707 "name": "Existed_Raid", 00:19:12.707 "uuid": "c83c0a4b-6efd-4994-8f54-574e4c6f546a", 00:19:12.707 "strip_size_kb": 64, 00:19:12.707 "state": "configuring", 00:19:12.707 "raid_level": "raid0", 00:19:12.707 "superblock": true, 00:19:12.707 "num_base_bdevs": 2, 00:19:12.707 "num_base_bdevs_discovered": 1, 00:19:12.707 "num_base_bdevs_operational": 2, 00:19:12.707 "base_bdevs_list": [ 00:19:12.707 { 00:19:12.707 "name": "BaseBdev1", 00:19:12.707 "uuid": "27532499-ee33-4fb2-8c23-62b4a1a069e3", 00:19:12.707 "is_configured": true, 00:19:12.707 "data_offset": 2048, 00:19:12.708 "data_size": 63488 00:19:12.708 }, 00:19:12.708 { 00:19:12.708 "name": "BaseBdev2", 00:19:12.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.708 "is_configured": false, 00:19:12.708 "data_offset": 0, 00:19:12.708 "data_size": 0 00:19:12.708 } 00:19:12.708 ] 00:19:12.708 }' 00:19:12.708 23:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.708 23:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.970 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:12.970 23:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.970 23:00:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:12.970 [2024-12-09 23:00:48.188946] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:12.970 [2024-12-09 23:00:48.189132] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:12.970 23:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.970 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:12.970 23:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.970 23:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.970 [2024-12-09 23:00:48.197023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:12.970 [2024-12-09 23:00:48.199063] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:12.970 [2024-12-09 23:00:48.199127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:12.970 23:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.970 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:12.970 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:12.970 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:19:12.970 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:12.970 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:12.970 23:00:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:12.970 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:12.970 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:12.970 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.970 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.970 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.970 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.970 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.970 23:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.970 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:12.970 23:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.970 23:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.970 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.970 "name": "Existed_Raid", 00:19:12.970 "uuid": "23bb4b43-331d-44aa-89f1-520ebca42a77", 00:19:12.970 "strip_size_kb": 64, 00:19:12.970 "state": "configuring", 00:19:12.970 "raid_level": "raid0", 00:19:12.970 "superblock": true, 00:19:12.970 "num_base_bdevs": 2, 00:19:12.970 "num_base_bdevs_discovered": 1, 00:19:12.970 "num_base_bdevs_operational": 2, 00:19:12.970 "base_bdevs_list": [ 00:19:12.970 { 00:19:12.970 "name": "BaseBdev1", 00:19:12.970 "uuid": "27532499-ee33-4fb2-8c23-62b4a1a069e3", 00:19:12.970 "is_configured": true, 00:19:12.970 "data_offset": 2048, 
00:19:12.970 "data_size": 63488 00:19:12.970 }, 00:19:12.970 { 00:19:12.970 "name": "BaseBdev2", 00:19:12.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.970 "is_configured": false, 00:19:12.970 "data_offset": 0, 00:19:12.970 "data_size": 0 00:19:12.970 } 00:19:12.970 ] 00:19:12.970 }' 00:19:12.970 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.970 23:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.232 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:13.232 23:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.232 23:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.232 [2024-12-09 23:00:48.567418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:13.232 BaseBdev2 00:19:13.232 [2024-12-09 23:00:48.567930] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:13.232 [2024-12-09 23:00:48.567955] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:13.232 [2024-12-09 23:00:48.568293] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:13.232 [2024-12-09 23:00:48.568458] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:13.232 [2024-12-09 23:00:48.568472] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:13.232 [2024-12-09 23:00:48.568615] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:13.232 23:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.232 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:19:13.232 23:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:13.232 23:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:13.232 23:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:13.232 23:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:13.232 23:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:13.232 23:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:13.232 23:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.232 23:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.232 23:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.232 23:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:13.232 23:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.232 23:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.232 [ 00:19:13.232 { 00:19:13.232 "name": "BaseBdev2", 00:19:13.232 "aliases": [ 00:19:13.232 "450b4f8f-f1dc-4605-8059-34515f299362" 00:19:13.232 ], 00:19:13.232 "product_name": "Malloc disk", 00:19:13.232 "block_size": 512, 00:19:13.232 "num_blocks": 65536, 00:19:13.232 "uuid": "450b4f8f-f1dc-4605-8059-34515f299362", 00:19:13.232 "assigned_rate_limits": { 00:19:13.232 "rw_ios_per_sec": 0, 00:19:13.232 "rw_mbytes_per_sec": 0, 00:19:13.232 "r_mbytes_per_sec": 0, 00:19:13.232 "w_mbytes_per_sec": 0 00:19:13.232 }, 00:19:13.232 "claimed": true, 00:19:13.232 "claim_type": 
"exclusive_write", 00:19:13.232 "zoned": false, 00:19:13.232 "supported_io_types": { 00:19:13.232 "read": true, 00:19:13.232 "write": true, 00:19:13.232 "unmap": true, 00:19:13.232 "flush": true, 00:19:13.232 "reset": true, 00:19:13.232 "nvme_admin": false, 00:19:13.232 "nvme_io": false, 00:19:13.232 "nvme_io_md": false, 00:19:13.232 "write_zeroes": true, 00:19:13.232 "zcopy": true, 00:19:13.232 "get_zone_info": false, 00:19:13.232 "zone_management": false, 00:19:13.232 "zone_append": false, 00:19:13.232 "compare": false, 00:19:13.232 "compare_and_write": false, 00:19:13.232 "abort": true, 00:19:13.232 "seek_hole": false, 00:19:13.232 "seek_data": false, 00:19:13.232 "copy": true, 00:19:13.232 "nvme_iov_md": false 00:19:13.232 }, 00:19:13.232 "memory_domains": [ 00:19:13.232 { 00:19:13.232 "dma_device_id": "system", 00:19:13.232 "dma_device_type": 1 00:19:13.232 }, 00:19:13.232 { 00:19:13.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.232 "dma_device_type": 2 00:19:13.232 } 00:19:13.232 ], 00:19:13.232 "driver_specific": {} 00:19:13.232 } 00:19:13.232 ] 00:19:13.232 23:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.494 23:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:13.494 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:13.494 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:13.494 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:19:13.494 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:13.494 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:13.494 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:19:13.494 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:13.494 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:13.494 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.494 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.494 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.494 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.494 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.494 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.494 23:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.494 23:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.494 23:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.494 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.494 "name": "Existed_Raid", 00:19:13.494 "uuid": "23bb4b43-331d-44aa-89f1-520ebca42a77", 00:19:13.494 "strip_size_kb": 64, 00:19:13.494 "state": "online", 00:19:13.494 "raid_level": "raid0", 00:19:13.494 "superblock": true, 00:19:13.494 "num_base_bdevs": 2, 00:19:13.494 "num_base_bdevs_discovered": 2, 00:19:13.494 "num_base_bdevs_operational": 2, 00:19:13.494 "base_bdevs_list": [ 00:19:13.494 { 00:19:13.494 "name": "BaseBdev1", 00:19:13.494 "uuid": "27532499-ee33-4fb2-8c23-62b4a1a069e3", 00:19:13.494 "is_configured": true, 00:19:13.494 "data_offset": 2048, 00:19:13.494 "data_size": 63488 
00:19:13.494 }, 00:19:13.494 { 00:19:13.494 "name": "BaseBdev2", 00:19:13.494 "uuid": "450b4f8f-f1dc-4605-8059-34515f299362", 00:19:13.494 "is_configured": true, 00:19:13.494 "data_offset": 2048, 00:19:13.494 "data_size": 63488 00:19:13.494 } 00:19:13.494 ] 00:19:13.494 }' 00:19:13.494 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.494 23:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.755 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:13.755 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:13.755 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:13.755 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:13.755 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:13.756 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:13.756 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:13.756 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:13.756 23:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.756 23:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.756 [2024-12-09 23:00:48.923822] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:13.756 23:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.756 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:13.756 "name": 
"Existed_Raid", 00:19:13.756 "aliases": [ 00:19:13.756 "23bb4b43-331d-44aa-89f1-520ebca42a77" 00:19:13.756 ], 00:19:13.756 "product_name": "Raid Volume", 00:19:13.756 "block_size": 512, 00:19:13.756 "num_blocks": 126976, 00:19:13.756 "uuid": "23bb4b43-331d-44aa-89f1-520ebca42a77", 00:19:13.756 "assigned_rate_limits": { 00:19:13.756 "rw_ios_per_sec": 0, 00:19:13.756 "rw_mbytes_per_sec": 0, 00:19:13.756 "r_mbytes_per_sec": 0, 00:19:13.756 "w_mbytes_per_sec": 0 00:19:13.756 }, 00:19:13.756 "claimed": false, 00:19:13.756 "zoned": false, 00:19:13.756 "supported_io_types": { 00:19:13.756 "read": true, 00:19:13.756 "write": true, 00:19:13.756 "unmap": true, 00:19:13.756 "flush": true, 00:19:13.756 "reset": true, 00:19:13.756 "nvme_admin": false, 00:19:13.756 "nvme_io": false, 00:19:13.756 "nvme_io_md": false, 00:19:13.756 "write_zeroes": true, 00:19:13.756 "zcopy": false, 00:19:13.756 "get_zone_info": false, 00:19:13.756 "zone_management": false, 00:19:13.756 "zone_append": false, 00:19:13.756 "compare": false, 00:19:13.756 "compare_and_write": false, 00:19:13.756 "abort": false, 00:19:13.756 "seek_hole": false, 00:19:13.756 "seek_data": false, 00:19:13.756 "copy": false, 00:19:13.756 "nvme_iov_md": false 00:19:13.756 }, 00:19:13.756 "memory_domains": [ 00:19:13.756 { 00:19:13.756 "dma_device_id": "system", 00:19:13.756 "dma_device_type": 1 00:19:13.756 }, 00:19:13.756 { 00:19:13.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.756 "dma_device_type": 2 00:19:13.756 }, 00:19:13.756 { 00:19:13.756 "dma_device_id": "system", 00:19:13.756 "dma_device_type": 1 00:19:13.756 }, 00:19:13.756 { 00:19:13.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.756 "dma_device_type": 2 00:19:13.756 } 00:19:13.756 ], 00:19:13.756 "driver_specific": { 00:19:13.756 "raid": { 00:19:13.756 "uuid": "23bb4b43-331d-44aa-89f1-520ebca42a77", 00:19:13.756 "strip_size_kb": 64, 00:19:13.756 "state": "online", 00:19:13.756 "raid_level": "raid0", 00:19:13.756 "superblock": true, 00:19:13.756 
"num_base_bdevs": 2, 00:19:13.756 "num_base_bdevs_discovered": 2, 00:19:13.756 "num_base_bdevs_operational": 2, 00:19:13.756 "base_bdevs_list": [ 00:19:13.756 { 00:19:13.756 "name": "BaseBdev1", 00:19:13.756 "uuid": "27532499-ee33-4fb2-8c23-62b4a1a069e3", 00:19:13.756 "is_configured": true, 00:19:13.756 "data_offset": 2048, 00:19:13.756 "data_size": 63488 00:19:13.756 }, 00:19:13.756 { 00:19:13.756 "name": "BaseBdev2", 00:19:13.756 "uuid": "450b4f8f-f1dc-4605-8059-34515f299362", 00:19:13.756 "is_configured": true, 00:19:13.756 "data_offset": 2048, 00:19:13.756 "data_size": 63488 00:19:13.756 } 00:19:13.756 ] 00:19:13.756 } 00:19:13.756 } 00:19:13.756 }' 00:19:13.756 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:13.756 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:13.756 BaseBdev2' 00:19:13.756 23:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:13.756 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:13.756 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:13.756 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:13.756 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:13.756 23:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.756 23:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.756 23:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:13.756 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:13.756 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:13.756 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:13.756 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:13.756 23:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.756 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:13.756 23:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.756 23:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.756 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:13.756 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:13.756 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:13.756 23:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.756 23:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.756 [2024-12-09 23:00:49.087596] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:13.756 [2024-12-09 23:00:49.087625] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:13.756 [2024-12-09 23:00:49.087670] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:14.015 23:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:19:14.015 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:14.015 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:19:14.015 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:14.015 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:19:14.015 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:19:14.015 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:19:14.015 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:14.015 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:19:14.015 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:14.015 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:14.015 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:14.015 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:14.015 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.015 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:14.015 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.015 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.015 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.015 23:00:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.015 23:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.015 23:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.015 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:14.015 "name": "Existed_Raid", 00:19:14.015 "uuid": "23bb4b43-331d-44aa-89f1-520ebca42a77", 00:19:14.015 "strip_size_kb": 64, 00:19:14.015 "state": "offline", 00:19:14.015 "raid_level": "raid0", 00:19:14.015 "superblock": true, 00:19:14.015 "num_base_bdevs": 2, 00:19:14.015 "num_base_bdevs_discovered": 1, 00:19:14.015 "num_base_bdevs_operational": 1, 00:19:14.015 "base_bdevs_list": [ 00:19:14.015 { 00:19:14.015 "name": null, 00:19:14.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.015 "is_configured": false, 00:19:14.015 "data_offset": 0, 00:19:14.015 "data_size": 63488 00:19:14.015 }, 00:19:14.015 { 00:19:14.015 "name": "BaseBdev2", 00:19:14.015 "uuid": "450b4f8f-f1dc-4605-8059-34515f299362", 00:19:14.015 "is_configured": true, 00:19:14.015 "data_offset": 2048, 00:19:14.015 "data_size": 63488 00:19:14.015 } 00:19:14.015 ] 00:19:14.015 }' 00:19:14.015 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.015 23:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.276 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:14.276 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:14.276 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.276 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:14.276 23:00:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.276 23:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.276 23:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.276 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:14.276 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:14.276 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:14.276 23:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.276 23:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.276 [2024-12-09 23:00:49.490966] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:14.276 [2024-12-09 23:00:49.491114] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:14.276 23:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.276 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:14.276 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:14.276 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.276 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:14.276 23:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.276 23:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.276 23:00:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.276 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:14.276 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:14.276 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:14.276 23:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 59590 00:19:14.276 23:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 59590 ']' 00:19:14.276 23:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 59590 00:19:14.276 23:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:19:14.276 23:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:14.276 23:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59590 00:19:14.276 killing process with pid 59590 00:19:14.276 23:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:14.276 23:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:14.276 23:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59590' 00:19:14.276 23:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 59590 00:19:14.276 [2024-12-09 23:00:49.618429] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:14.276 23:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 59590 00:19:14.276 [2024-12-09 23:00:49.629274] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:15.216 23:00:50 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:19:15.216 00:19:15.216 real 0m3.955s 00:19:15.216 user 0m5.738s 00:19:15.216 sys 0m0.590s 00:19:15.216 23:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:15.216 23:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.216 ************************************ 00:19:15.216 END TEST raid_state_function_test_sb 00:19:15.216 ************************************ 00:19:15.216 23:00:50 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:19:15.216 23:00:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:15.216 23:00:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:15.216 23:00:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:15.216 ************************************ 00:19:15.216 START TEST raid_superblock_test 00:19:15.216 ************************************ 00:19:15.216 23:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:19:15.216 23:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:19:15.216 23:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:15.216 23:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:15.216 23:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:15.216 23:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:15.216 23:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:15.216 23:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:15.216 23:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:15.216 23:00:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:15.216 23:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:15.216 23:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:15.216 23:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:15.216 23:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:15.216 23:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:19:15.216 23:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:19:15.216 23:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:19:15.216 23:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=59830 00:19:15.216 23:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:15.216 23:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 59830 00:19:15.216 23:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 59830 ']' 00:19:15.217 23:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.217 23:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:15.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.217 23:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:15.217 23:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:15.217 23:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.217 [2024-12-09 23:00:50.536704] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:19:15.217 [2024-12-09 23:00:50.536869] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59830 ] 00:19:15.477 [2024-12-09 23:00:50.702957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.477 [2024-12-09 23:00:50.809673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.738 [2024-12-09 23:00:50.944667] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:15.738 [2024-12-09 23:00:50.944724] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:16.310 23:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:16.310 23:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:19:16.310 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:16.310 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:16.310 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:16.310 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:16.310 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:16.310 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:16.310 23:00:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:16.310 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:16.310 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:19:16.310 23:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.310 23:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.310 malloc1 00:19:16.310 23:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.310 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:16.310 23:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.310 23:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.310 [2024-12-09 23:00:51.421998] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:16.310 [2024-12-09 23:00:51.422058] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:16.310 [2024-12-09 23:00:51.422081] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:16.310 [2024-12-09 23:00:51.422090] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:16.310 [2024-12-09 23:00:51.424274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:16.310 [2024-12-09 23:00:51.424310] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:16.310 pt1 00:19:16.310 23:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.310 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:16.310 23:00:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:16.310 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:16.310 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:16.310 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:16.310 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:16.311 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:16.311 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:16.311 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:19:16.311 23:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.311 23:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.311 malloc2 00:19:16.311 23:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.311 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:16.311 23:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.311 23:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.311 [2024-12-09 23:00:51.466712] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:16.311 [2024-12-09 23:00:51.466764] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:16.311 [2024-12-09 23:00:51.466787] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:16.311 
[2024-12-09 23:00:51.466796] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:16.311 [2024-12-09 23:00:51.468916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:16.311 [2024-12-09 23:00:51.468953] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:16.311 pt2 00:19:16.311 23:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.311 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:16.311 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:16.311 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:16.311 23:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.311 23:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.311 [2024-12-09 23:00:51.474766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:16.311 [2024-12-09 23:00:51.476598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:16.311 [2024-12-09 23:00:51.476752] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:16.311 [2024-12-09 23:00:51.476763] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:16.311 [2024-12-09 23:00:51.477035] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:16.311 [2024-12-09 23:00:51.477201] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:16.311 [2024-12-09 23:00:51.477214] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:16.311 [2024-12-09 23:00:51.477353] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:16.311 23:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.311 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:19:16.311 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:16.311 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:16.311 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:16.311 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:16.311 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:16.311 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:16.311 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:16.311 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:16.311 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:16.311 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.311 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.311 23:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.311 23:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.311 23:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.311 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:16.311 "name": "raid_bdev1", 00:19:16.311 "uuid": 
"6d4989cf-c320-43f8-9a54-21ded45b5602", 00:19:16.311 "strip_size_kb": 64, 00:19:16.311 "state": "online", 00:19:16.311 "raid_level": "raid0", 00:19:16.311 "superblock": true, 00:19:16.311 "num_base_bdevs": 2, 00:19:16.311 "num_base_bdevs_discovered": 2, 00:19:16.311 "num_base_bdevs_operational": 2, 00:19:16.311 "base_bdevs_list": [ 00:19:16.311 { 00:19:16.311 "name": "pt1", 00:19:16.311 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:16.311 "is_configured": true, 00:19:16.311 "data_offset": 2048, 00:19:16.311 "data_size": 63488 00:19:16.311 }, 00:19:16.311 { 00:19:16.311 "name": "pt2", 00:19:16.311 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:16.311 "is_configured": true, 00:19:16.311 "data_offset": 2048, 00:19:16.311 "data_size": 63488 00:19:16.311 } 00:19:16.311 ] 00:19:16.311 }' 00:19:16.311 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:16.311 23:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.574 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:16.574 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:16.574 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:16.574 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:16.574 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:16.574 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:16.574 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:16.574 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:16.574 23:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.574 23:00:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.574 [2024-12-09 23:00:51.811122] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:16.574 23:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.574 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:16.574 "name": "raid_bdev1", 00:19:16.574 "aliases": [ 00:19:16.574 "6d4989cf-c320-43f8-9a54-21ded45b5602" 00:19:16.574 ], 00:19:16.574 "product_name": "Raid Volume", 00:19:16.574 "block_size": 512, 00:19:16.574 "num_blocks": 126976, 00:19:16.574 "uuid": "6d4989cf-c320-43f8-9a54-21ded45b5602", 00:19:16.574 "assigned_rate_limits": { 00:19:16.574 "rw_ios_per_sec": 0, 00:19:16.574 "rw_mbytes_per_sec": 0, 00:19:16.574 "r_mbytes_per_sec": 0, 00:19:16.574 "w_mbytes_per_sec": 0 00:19:16.574 }, 00:19:16.574 "claimed": false, 00:19:16.574 "zoned": false, 00:19:16.574 "supported_io_types": { 00:19:16.574 "read": true, 00:19:16.574 "write": true, 00:19:16.574 "unmap": true, 00:19:16.574 "flush": true, 00:19:16.574 "reset": true, 00:19:16.574 "nvme_admin": false, 00:19:16.574 "nvme_io": false, 00:19:16.575 "nvme_io_md": false, 00:19:16.575 "write_zeroes": true, 00:19:16.575 "zcopy": false, 00:19:16.575 "get_zone_info": false, 00:19:16.575 "zone_management": false, 00:19:16.575 "zone_append": false, 00:19:16.575 "compare": false, 00:19:16.575 "compare_and_write": false, 00:19:16.575 "abort": false, 00:19:16.575 "seek_hole": false, 00:19:16.575 "seek_data": false, 00:19:16.575 "copy": false, 00:19:16.575 "nvme_iov_md": false 00:19:16.575 }, 00:19:16.575 "memory_domains": [ 00:19:16.575 { 00:19:16.575 "dma_device_id": "system", 00:19:16.575 "dma_device_type": 1 00:19:16.575 }, 00:19:16.575 { 00:19:16.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:16.575 "dma_device_type": 2 00:19:16.575 }, 00:19:16.575 { 00:19:16.575 "dma_device_id": "system", 00:19:16.575 "dma_device_type": 
1 00:19:16.575 }, 00:19:16.575 { 00:19:16.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:16.575 "dma_device_type": 2 00:19:16.575 } 00:19:16.575 ], 00:19:16.575 "driver_specific": { 00:19:16.575 "raid": { 00:19:16.575 "uuid": "6d4989cf-c320-43f8-9a54-21ded45b5602", 00:19:16.575 "strip_size_kb": 64, 00:19:16.575 "state": "online", 00:19:16.575 "raid_level": "raid0", 00:19:16.575 "superblock": true, 00:19:16.575 "num_base_bdevs": 2, 00:19:16.575 "num_base_bdevs_discovered": 2, 00:19:16.575 "num_base_bdevs_operational": 2, 00:19:16.575 "base_bdevs_list": [ 00:19:16.575 { 00:19:16.575 "name": "pt1", 00:19:16.575 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:16.575 "is_configured": true, 00:19:16.575 "data_offset": 2048, 00:19:16.575 "data_size": 63488 00:19:16.575 }, 00:19:16.575 { 00:19:16.575 "name": "pt2", 00:19:16.575 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:16.575 "is_configured": true, 00:19:16.575 "data_offset": 2048, 00:19:16.575 "data_size": 63488 00:19:16.575 } 00:19:16.575 ] 00:19:16.575 } 00:19:16.575 } 00:19:16.575 }' 00:19:16.575 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:16.575 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:16.575 pt2' 00:19:16.575 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:16.575 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:16.575 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:16.575 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:16.575 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:19:16.575 23:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.575 23:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.835 23:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.835 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:16.835 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:16.835 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:16.835 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:16.835 23:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.835 23:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.835 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:16.835 23:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.835 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:16.835 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:16.835 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:16.835 23:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:16.835 23:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.835 23:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.835 [2024-12-09 23:00:52.003150] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:16.835 23:00:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6d4989cf-c320-43f8-9a54-21ded45b5602 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6d4989cf-c320-43f8-9a54-21ded45b5602 ']' 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.835 [2024-12-09 23:00:52.034843] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:16.835 [2024-12-09 23:00:52.034869] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:16.835 [2024-12-09 23:00:52.034946] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:16.835 [2024-12-09 23:00:52.034990] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:16.835 [2024-12-09 23:00:52.035002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.835 23:00:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:16.835 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.836 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.836 [2024-12-09 23:00:52.162905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:16.836 [2024-12-09 23:00:52.164823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:16.836 [2024-12-09 23:00:52.164895] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:16.836 [2024-12-09 23:00:52.164942] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:16.836 [2024-12-09 23:00:52.164957] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:16.836 [2024-12-09 23:00:52.164971] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:16.836 request: 00:19:16.836 { 00:19:16.836 "name": "raid_bdev1", 00:19:16.836 "raid_level": "raid0", 00:19:16.836 "base_bdevs": [ 00:19:16.836 "malloc1", 00:19:16.836 "malloc2" 00:19:16.836 ], 00:19:16.836 "strip_size_kb": 64, 00:19:16.836 "superblock": false, 00:19:16.836 "method": "bdev_raid_create", 00:19:16.836 "req_id": 1 00:19:16.836 } 00:19:16.836 Got JSON-RPC error response 00:19:16.836 response: 00:19:16.836 { 00:19:16.836 "code": -17, 00:19:16.836 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:16.836 } 00:19:16.836 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:16.836 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:19:16.836 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:16.836 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:16.836 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:16.836 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.836 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:16.836 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.836 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.836 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.096 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:17.096 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:17.096 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:19:17.096 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.097 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.097 [2024-12-09 23:00:52.206913] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:17.097 [2024-12-09 23:00:52.206972] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:17.097 [2024-12-09 23:00:52.206988] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:17.097 [2024-12-09 23:00:52.207000] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:17.097 [2024-12-09 23:00:52.209289] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:17.097 [2024-12-09 23:00:52.209327] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:17.097 [2024-12-09 23:00:52.209413] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:17.097 [2024-12-09 23:00:52.209464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:17.097 pt1 00:19:17.097 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.097 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:19:17.097 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:17.097 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:17.097 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:17.097 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:17.097 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:19:17.097 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.097 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.097 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.097 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.097 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.097 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.097 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.097 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.097 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.097 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.097 "name": "raid_bdev1", 00:19:17.097 "uuid": "6d4989cf-c320-43f8-9a54-21ded45b5602", 00:19:17.097 "strip_size_kb": 64, 00:19:17.097 "state": "configuring", 00:19:17.097 "raid_level": "raid0", 00:19:17.097 "superblock": true, 00:19:17.097 "num_base_bdevs": 2, 00:19:17.097 "num_base_bdevs_discovered": 1, 00:19:17.097 "num_base_bdevs_operational": 2, 00:19:17.097 "base_bdevs_list": [ 00:19:17.097 { 00:19:17.097 "name": "pt1", 00:19:17.097 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:17.097 "is_configured": true, 00:19:17.097 "data_offset": 2048, 00:19:17.097 "data_size": 63488 00:19:17.097 }, 00:19:17.097 { 00:19:17.097 "name": null, 00:19:17.097 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:17.097 "is_configured": false, 00:19:17.097 "data_offset": 2048, 00:19:17.097 "data_size": 63488 00:19:17.097 } 00:19:17.097 ] 00:19:17.097 }' 00:19:17.097 23:00:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.097 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.358 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:17.358 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:17.358 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:17.358 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:17.358 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.358 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.358 [2024-12-09 23:00:52.530998] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:17.358 [2024-12-09 23:00:52.531059] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:17.358 [2024-12-09 23:00:52.531077] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:17.358 [2024-12-09 23:00:52.531088] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:17.358 [2024-12-09 23:00:52.531519] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:17.358 [2024-12-09 23:00:52.531544] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:17.358 [2024-12-09 23:00:52.531615] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:17.358 [2024-12-09 23:00:52.531641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:17.358 [2024-12-09 23:00:52.531748] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:17.358 [2024-12-09 23:00:52.531759] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:17.358 [2024-12-09 23:00:52.532001] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:17.358 [2024-12-09 23:00:52.532152] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:17.358 [2024-12-09 23:00:52.532161] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:17.358 [2024-12-09 23:00:52.532287] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.358 pt2 00:19:17.358 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.358 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:17.358 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:17.358 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:19:17.358 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:17.358 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:17.358 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:17.358 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:17.358 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:17.358 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.358 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.358 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.358 23:00:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.358 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.358 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.358 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.358 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.358 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.358 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.358 "name": "raid_bdev1", 00:19:17.358 "uuid": "6d4989cf-c320-43f8-9a54-21ded45b5602", 00:19:17.358 "strip_size_kb": 64, 00:19:17.358 "state": "online", 00:19:17.358 "raid_level": "raid0", 00:19:17.358 "superblock": true, 00:19:17.358 "num_base_bdevs": 2, 00:19:17.358 "num_base_bdevs_discovered": 2, 00:19:17.358 "num_base_bdevs_operational": 2, 00:19:17.358 "base_bdevs_list": [ 00:19:17.358 { 00:19:17.358 "name": "pt1", 00:19:17.358 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:17.358 "is_configured": true, 00:19:17.358 "data_offset": 2048, 00:19:17.358 "data_size": 63488 00:19:17.358 }, 00:19:17.358 { 00:19:17.358 "name": "pt2", 00:19:17.359 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:17.359 "is_configured": true, 00:19:17.359 "data_offset": 2048, 00:19:17.359 "data_size": 63488 00:19:17.359 } 00:19:17.359 ] 00:19:17.359 }' 00:19:17.359 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.359 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.620 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:17.620 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:17.620 
23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:17.620 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:17.620 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:17.620 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:17.620 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:17.620 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:17.620 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.620 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.620 [2024-12-09 23:00:52.867388] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:17.620 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.620 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:17.620 "name": "raid_bdev1", 00:19:17.620 "aliases": [ 00:19:17.620 "6d4989cf-c320-43f8-9a54-21ded45b5602" 00:19:17.620 ], 00:19:17.620 "product_name": "Raid Volume", 00:19:17.620 "block_size": 512, 00:19:17.620 "num_blocks": 126976, 00:19:17.620 "uuid": "6d4989cf-c320-43f8-9a54-21ded45b5602", 00:19:17.620 "assigned_rate_limits": { 00:19:17.620 "rw_ios_per_sec": 0, 00:19:17.620 "rw_mbytes_per_sec": 0, 00:19:17.620 "r_mbytes_per_sec": 0, 00:19:17.620 "w_mbytes_per_sec": 0 00:19:17.620 }, 00:19:17.620 "claimed": false, 00:19:17.620 "zoned": false, 00:19:17.620 "supported_io_types": { 00:19:17.620 "read": true, 00:19:17.620 "write": true, 00:19:17.620 "unmap": true, 00:19:17.620 "flush": true, 00:19:17.620 "reset": true, 00:19:17.620 "nvme_admin": false, 00:19:17.620 "nvme_io": false, 00:19:17.620 "nvme_io_md": false, 00:19:17.620 
"write_zeroes": true, 00:19:17.620 "zcopy": false, 00:19:17.620 "get_zone_info": false, 00:19:17.620 "zone_management": false, 00:19:17.620 "zone_append": false, 00:19:17.620 "compare": false, 00:19:17.620 "compare_and_write": false, 00:19:17.620 "abort": false, 00:19:17.620 "seek_hole": false, 00:19:17.620 "seek_data": false, 00:19:17.620 "copy": false, 00:19:17.620 "nvme_iov_md": false 00:19:17.620 }, 00:19:17.620 "memory_domains": [ 00:19:17.620 { 00:19:17.620 "dma_device_id": "system", 00:19:17.620 "dma_device_type": 1 00:19:17.620 }, 00:19:17.620 { 00:19:17.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.620 "dma_device_type": 2 00:19:17.620 }, 00:19:17.620 { 00:19:17.620 "dma_device_id": "system", 00:19:17.620 "dma_device_type": 1 00:19:17.620 }, 00:19:17.620 { 00:19:17.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.620 "dma_device_type": 2 00:19:17.620 } 00:19:17.620 ], 00:19:17.620 "driver_specific": { 00:19:17.620 "raid": { 00:19:17.620 "uuid": "6d4989cf-c320-43f8-9a54-21ded45b5602", 00:19:17.620 "strip_size_kb": 64, 00:19:17.620 "state": "online", 00:19:17.620 "raid_level": "raid0", 00:19:17.620 "superblock": true, 00:19:17.620 "num_base_bdevs": 2, 00:19:17.620 "num_base_bdevs_discovered": 2, 00:19:17.620 "num_base_bdevs_operational": 2, 00:19:17.620 "base_bdevs_list": [ 00:19:17.620 { 00:19:17.620 "name": "pt1", 00:19:17.620 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:17.620 "is_configured": true, 00:19:17.620 "data_offset": 2048, 00:19:17.620 "data_size": 63488 00:19:17.620 }, 00:19:17.620 { 00:19:17.620 "name": "pt2", 00:19:17.620 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:17.620 "is_configured": true, 00:19:17.620 "data_offset": 2048, 00:19:17.620 "data_size": 63488 00:19:17.620 } 00:19:17.620 ] 00:19:17.620 } 00:19:17.620 } 00:19:17.620 }' 00:19:17.620 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:19:17.620 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:17.620 pt2' 00:19:17.620 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:17.620 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:17.620 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:17.620 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:17.620 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.620 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.620 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:17.620 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.882 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:17.882 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:17.882 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:17.882 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:17.882 23:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:17.882 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.882 23:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.882 23:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.882 23:00:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:17.882 23:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:17.882 23:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:17.882 23:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:17.882 23:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.882 23:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.882 [2024-12-09 23:00:53.039417] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:17.882 23:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.882 23:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6d4989cf-c320-43f8-9a54-21ded45b5602 '!=' 6d4989cf-c320-43f8-9a54-21ded45b5602 ']' 00:19:17.882 23:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:19:17.882 23:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:17.882 23:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:19:17.882 23:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 59830 00:19:17.882 23:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 59830 ']' 00:19:17.882 23:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 59830 00:19:17.882 23:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:19:17.882 23:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.882 23:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59830 00:19:17.882 23:00:53 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:17.882 23:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:17.882 killing process with pid 59830 00:19:17.882 23:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59830' 00:19:17.882 23:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 59830 00:19:17.882 [2024-12-09 23:00:53.097615] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:17.882 23:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 59830 00:19:17.882 [2024-12-09 23:00:53.097715] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:17.882 [2024-12-09 23:00:53.097769] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:17.882 [2024-12-09 23:00:53.097782] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:17.882 [2024-12-09 23:00:53.232779] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:18.824 23:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:19:18.824 00:19:18.824 real 0m3.527s 00:19:18.824 user 0m4.952s 00:19:18.824 sys 0m0.564s 00:19:18.824 23:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:18.824 ************************************ 00:19:18.824 END TEST raid_superblock_test 00:19:18.824 ************************************ 00:19:18.824 23:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.824 23:00:54 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:19:18.824 23:00:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:18.824 23:00:54 bdev_raid -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:19:18.824 23:00:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:18.824 ************************************ 00:19:18.824 START TEST raid_read_error_test 00:19:18.824 ************************************ 00:19:18.824 23:00:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:19:18.824 23:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:19:18.824 23:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:19:18.824 23:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:19:18.824 23:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:19:18.824 23:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:18.824 23:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:19:18.824 23:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:18.824 23:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:18.824 23:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:19:18.824 23:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:18.824 23:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:18.824 23:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:18.824 23:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:19:18.824 23:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:19:18.824 23:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:19:18.824 23:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:19:18.824 23:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:19:18.824 23:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:19:18.824 23:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:19:18.824 23:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:19:18.824 23:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:19:18.824 23:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:19:18.824 23:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.y2RIZc2ym7 00:19:18.824 23:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=60026 00:19:18.824 23:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 60026 00:19:18.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.824 23:00:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 60026 ']' 00:19:18.824 23:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:18.824 23:00:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.824 23:00:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:18.824 23:00:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:18.825 23:00:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:18.825 23:00:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.825 [2024-12-09 23:00:54.136145] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:19:18.825 [2024-12-09 23:00:54.136286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60026 ] 00:19:19.084 [2024-12-09 23:00:54.295537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.084 [2024-12-09 23:00:54.414652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.347 [2024-12-09 23:00:54.562548] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:19.347 [2024-12-09 23:00:54.562584] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.923 BaseBdev1_malloc 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.923 true 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.923 [2024-12-09 23:00:55.056089] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:19.923 [2024-12-09 23:00:55.056173] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.923 [2024-12-09 23:00:55.056193] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:19.923 [2024-12-09 23:00:55.056206] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.923 [2024-12-09 23:00:55.058493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.923 [2024-12-09 23:00:55.058536] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:19.923 BaseBdev1 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:19:19.923 BaseBdev2_malloc 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.923 true 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.923 [2024-12-09 23:00:55.105634] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:19.923 [2024-12-09 23:00:55.105687] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.923 [2024-12-09 23:00:55.105703] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:19.923 [2024-12-09 23:00:55.105713] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.923 [2024-12-09 23:00:55.107911] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.923 [2024-12-09 23:00:55.107950] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:19.923 BaseBdev2 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:19:19.923 23:00:55 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.923 [2024-12-09 23:00:55.113697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:19.923 [2024-12-09 23:00:55.115608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:19.923 [2024-12-09 23:00:55.115808] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:19.923 [2024-12-09 23:00:55.115824] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:19.923 [2024-12-09 23:00:55.116077] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:19.923 [2024-12-09 23:00:55.116267] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:19.923 [2024-12-09 23:00:55.116279] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:19.923 [2024-12-09 23:00:55.116424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.923 "name": "raid_bdev1", 00:19:19.923 "uuid": "31ba491d-7f04-49e4-98a3-e4b81288ebe8", 00:19:19.923 "strip_size_kb": 64, 00:19:19.923 "state": "online", 00:19:19.923 "raid_level": "raid0", 00:19:19.923 "superblock": true, 00:19:19.923 "num_base_bdevs": 2, 00:19:19.923 "num_base_bdevs_discovered": 2, 00:19:19.923 "num_base_bdevs_operational": 2, 00:19:19.923 "base_bdevs_list": [ 00:19:19.923 { 00:19:19.923 "name": "BaseBdev1", 00:19:19.923 "uuid": "dc24c806-a78f-5ef6-bfa6-179e21f8f790", 00:19:19.923 "is_configured": true, 00:19:19.923 "data_offset": 2048, 00:19:19.923 "data_size": 63488 00:19:19.923 }, 00:19:19.923 { 00:19:19.923 "name": "BaseBdev2", 00:19:19.923 "uuid": "e4d24017-557b-5248-95dd-4d1249b5a2da", 00:19:19.923 "is_configured": true, 00:19:19.923 "data_offset": 2048, 00:19:19.923 "data_size": 63488 00:19:19.923 } 00:19:19.923 ] 00:19:19.923 }' 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.923 23:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.185 23:00:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:20.185 23:00:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:19:20.185 [2024-12-09 23:00:55.498714] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:21.126 23:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:19:21.126 23:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.126 23:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.126 23:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.126 23:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:19:21.126 23:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:19:21.126 23:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:19:21.126 23:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:19:21.126 23:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:21.126 23:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:21.126 23:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:21.126 23:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:21.126 23:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:19:21.126 23:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.126 23:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.126 23:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.126 23:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.126 23:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.126 23:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.126 23:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.126 23:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.126 23:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.126 23:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.126 "name": "raid_bdev1", 00:19:21.126 "uuid": "31ba491d-7f04-49e4-98a3-e4b81288ebe8", 00:19:21.126 "strip_size_kb": 64, 00:19:21.126 "state": "online", 00:19:21.126 "raid_level": "raid0", 00:19:21.126 "superblock": true, 00:19:21.126 "num_base_bdevs": 2, 00:19:21.126 "num_base_bdevs_discovered": 2, 00:19:21.126 "num_base_bdevs_operational": 2, 00:19:21.126 "base_bdevs_list": [ 00:19:21.126 { 00:19:21.126 "name": "BaseBdev1", 00:19:21.126 "uuid": "dc24c806-a78f-5ef6-bfa6-179e21f8f790", 00:19:21.126 "is_configured": true, 00:19:21.126 "data_offset": 2048, 00:19:21.126 "data_size": 63488 00:19:21.126 }, 00:19:21.126 { 00:19:21.126 "name": "BaseBdev2", 00:19:21.126 "uuid": "e4d24017-557b-5248-95dd-4d1249b5a2da", 00:19:21.126 "is_configured": true, 00:19:21.126 "data_offset": 2048, 00:19:21.126 "data_size": 63488 00:19:21.126 } 00:19:21.126 ] 00:19:21.126 }' 00:19:21.126 23:00:56 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.126 23:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.388 23:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:21.388 23:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.388 23:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.388 [2024-12-09 23:00:56.736356] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:21.388 [2024-12-09 23:00:56.736394] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:21.388 [2024-12-09 23:00:56.739466] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:21.388 [2024-12-09 23:00:56.739513] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:21.388 [2024-12-09 23:00:56.739547] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:21.388 [2024-12-09 23:00:56.739559] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:21.388 { 00:19:21.388 "results": [ 00:19:21.388 { 00:19:21.388 "job": "raid_bdev1", 00:19:21.388 "core_mask": "0x1", 00:19:21.388 "workload": "randrw", 00:19:21.388 "percentage": 50, 00:19:21.388 "status": "finished", 00:19:21.388 "queue_depth": 1, 00:19:21.388 "io_size": 131072, 00:19:21.388 "runtime": 1.235849, 00:19:21.388 "iops": 13791.328875938727, 00:19:21.388 "mibps": 1723.9161094923409, 00:19:21.388 "io_failed": 1, 00:19:21.388 "io_timeout": 0, 00:19:21.388 "avg_latency_us": 98.96600582169371, 00:19:21.388 "min_latency_us": 34.26461538461538, 00:19:21.388 "max_latency_us": 1701.4153846153847 00:19:21.388 } 00:19:21.388 ], 00:19:21.388 "core_count": 1 00:19:21.388 } 00:19:21.388 23:00:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.388 23:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 60026 00:19:21.388 23:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 60026 ']' 00:19:21.388 23:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 60026 00:19:21.388 23:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:19:21.388 23:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:21.388 23:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60026 00:19:21.649 killing process with pid 60026 00:19:21.649 23:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:21.649 23:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:21.649 23:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60026' 00:19:21.649 23:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 60026 00:19:21.649 23:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 60026 00:19:21.649 [2024-12-09 23:00:56.766421] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:21.649 [2024-12-09 23:00:56.849489] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:22.592 23:00:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.y2RIZc2ym7 00:19:22.592 23:00:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:19:22.592 23:00:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:19:22.592 23:00:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.81 00:19:22.592 23:00:57 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:19:22.592 23:00:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:22.592 23:00:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:19:22.592 23:00:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.81 != \0\.\0\0 ]] 00:19:22.592 00:19:22.592 real 0m3.552s 00:19:22.592 user 0m4.197s 00:19:22.592 sys 0m0.426s 00:19:22.592 23:00:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:22.592 ************************************ 00:19:22.592 END TEST raid_read_error_test 00:19:22.592 ************************************ 00:19:22.592 23:00:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.592 23:00:57 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:19:22.592 23:00:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:22.592 23:00:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:22.592 23:00:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:22.592 ************************************ 00:19:22.592 START TEST raid_write_error_test 00:19:22.592 ************************************ 00:19:22.592 23:00:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:19:22.592 23:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:19:22.592 23:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:19:22.592 23:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:19:22.592 23:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:19:22.592 23:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:22.592 23:00:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:19:22.592 23:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:22.592 23:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:22.592 23:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:19:22.592 23:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:22.592 23:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:22.592 23:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:22.592 23:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:19:22.592 23:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:19:22.592 23:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:19:22.592 23:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:19:22.592 23:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:19:22.592 23:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:19:22.592 23:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:19:22.592 23:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:19:22.592 23:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:19:22.592 23:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:19:22.592 23:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.P9JXYtGoR7 00:19:22.592 23:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=60161 00:19:22.592 23:00:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 60161 00:19:22.592 23:00:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 60161 ']' 00:19:22.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.593 23:00:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.593 23:00:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.593 23:00:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.593 23:00:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.593 23:00:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.593 23:00:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:22.593 [2024-12-09 23:00:57.755054] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:19:22.593 [2024-12-09 23:00:57.755189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60161 ] 00:19:22.593 [2024-12-09 23:00:57.919019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.853 [2024-12-09 23:00:58.053570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.113 [2024-12-09 23:00:58.215927] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:23.113 [2024-12-09 23:00:58.216005] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:23.374 23:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.374 23:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:19:23.374 23:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:23.374 23:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:23.375 23:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.375 23:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.375 BaseBdev1_malloc 00:19:23.375 23:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.375 23:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:19:23.375 23:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.375 23:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.375 true 00:19:23.375 23:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:23.375 23:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:23.375 23:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.375 23:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.375 [2024-12-09 23:00:58.669055] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:23.375 [2024-12-09 23:00:58.669300] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.375 [2024-12-09 23:00:58.669334] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:23.375 [2024-12-09 23:00:58.669347] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.375 [2024-12-09 23:00:58.671787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.375 BaseBdev1 00:19:23.375 [2024-12-09 23:00:58.671971] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:23.375 23:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.375 23:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:23.375 23:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:23.375 23:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.375 23:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.375 BaseBdev2_malloc 00:19:23.375 23:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.375 23:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:19:23.375 23:00:58 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.375 23:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.375 true 00:19:23.375 23:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.375 23:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:23.375 23:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.375 23:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.375 [2024-12-09 23:00:58.721959] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:23.375 [2024-12-09 23:00:58.722212] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.375 [2024-12-09 23:00:58.722243] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:23.375 [2024-12-09 23:00:58.722255] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.375 [2024-12-09 23:00:58.724836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.375 [2024-12-09 23:00:58.725021] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:23.375 BaseBdev2 00:19:23.375 23:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.375 23:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:19:23.375 23:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.375 23:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.375 [2024-12-09 23:00:58.734225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:19:23.639 [2024-12-09 23:00:58.737283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:23.639 [2024-12-09 23:00:58.737770] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:23.639 [2024-12-09 23:00:58.737809] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:23.639 [2024-12-09 23:00:58.738235] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:23.639 [2024-12-09 23:00:58.738525] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:23.639 [2024-12-09 23:00:58.738560] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:23.639 [2024-12-09 23:00:58.738877] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:23.639 23:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.639 23:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:19:23.639 23:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:23.639 23:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:23.639 23:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:23.639 23:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:23.639 23:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:23.639 23:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.639 23:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.639 23:00:58 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.639 23:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.639 23:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.639 23:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.639 23:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.639 23:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.639 23:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.639 23:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.639 "name": "raid_bdev1", 00:19:23.639 "uuid": "140c14cb-7415-414e-b572-18a7a6441cb9", 00:19:23.639 "strip_size_kb": 64, 00:19:23.639 "state": "online", 00:19:23.639 "raid_level": "raid0", 00:19:23.639 "superblock": true, 00:19:23.639 "num_base_bdevs": 2, 00:19:23.639 "num_base_bdevs_discovered": 2, 00:19:23.639 "num_base_bdevs_operational": 2, 00:19:23.639 "base_bdevs_list": [ 00:19:23.639 { 00:19:23.639 "name": "BaseBdev1", 00:19:23.639 "uuid": "de9e1a0f-ec33-5cfb-bd8d-5651dbaca564", 00:19:23.639 "is_configured": true, 00:19:23.639 "data_offset": 2048, 00:19:23.639 "data_size": 63488 00:19:23.639 }, 00:19:23.639 { 00:19:23.639 "name": "BaseBdev2", 00:19:23.639 "uuid": "0059a4aa-4b33-5d08-8db0-5caa4862fa1d", 00:19:23.639 "is_configured": true, 00:19:23.639 "data_offset": 2048, 00:19:23.639 "data_size": 63488 00:19:23.639 } 00:19:23.639 ] 00:19:23.639 }' 00:19:23.639 23:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.639 23:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.900 23:00:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:19:23.900 23:00:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:23.900 [2024-12-09 23:00:59.176028] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:24.887 23:01:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:19:24.887 23:01:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.887 23:01:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.887 23:01:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.887 23:01:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:19:24.887 23:01:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:19:24.887 23:01:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:19:24.887 23:01:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:19:24.887 23:01:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:24.887 23:01:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:24.887 23:01:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:24.887 23:01:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:24.887 23:01:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:24.887 23:01:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.887 23:01:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.887 23:01:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.887 23:01:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.887 23:01:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.887 23:01:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.887 23:01:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.887 23:01:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.887 23:01:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.887 23:01:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.887 "name": "raid_bdev1", 00:19:24.887 "uuid": "140c14cb-7415-414e-b572-18a7a6441cb9", 00:19:24.887 "strip_size_kb": 64, 00:19:24.887 "state": "online", 00:19:24.887 "raid_level": "raid0", 00:19:24.887 "superblock": true, 00:19:24.887 "num_base_bdevs": 2, 00:19:24.887 "num_base_bdevs_discovered": 2, 00:19:24.887 "num_base_bdevs_operational": 2, 00:19:24.887 "base_bdevs_list": [ 00:19:24.887 { 00:19:24.887 "name": "BaseBdev1", 00:19:24.887 "uuid": "de9e1a0f-ec33-5cfb-bd8d-5651dbaca564", 00:19:24.887 "is_configured": true, 00:19:24.887 "data_offset": 2048, 00:19:24.887 "data_size": 63488 00:19:24.887 }, 00:19:24.887 { 00:19:24.887 "name": "BaseBdev2", 00:19:24.887 "uuid": "0059a4aa-4b33-5d08-8db0-5caa4862fa1d", 00:19:24.887 "is_configured": true, 00:19:24.887 "data_offset": 2048, 00:19:24.887 "data_size": 63488 00:19:24.887 } 00:19:24.888 ] 00:19:24.888 }' 00:19:24.888 23:01:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.888 23:01:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.148 23:01:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:19:25.148 23:01:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.148 23:01:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.148 [2024-12-09 23:01:00.435838] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:25.148 [2024-12-09 23:01:00.435886] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:25.148 [2024-12-09 23:01:00.439146] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:25.148 [2024-12-09 23:01:00.439202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:25.148 [2024-12-09 23:01:00.439242] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:25.148 [2024-12-09 23:01:00.439255] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:25.148 { 00:19:25.148 "results": [ 00:19:25.148 { 00:19:25.148 "job": "raid_bdev1", 00:19:25.148 "core_mask": "0x1", 00:19:25.148 "workload": "randrw", 00:19:25.148 "percentage": 50, 00:19:25.148 "status": "finished", 00:19:25.148 "queue_depth": 1, 00:19:25.148 "io_size": 131072, 00:19:25.148 "runtime": 1.257738, 00:19:25.148 "iops": 12060.540430518915, 00:19:25.148 "mibps": 1507.5675538148644, 00:19:25.148 "io_failed": 1, 00:19:25.148 "io_timeout": 0, 00:19:25.148 "avg_latency_us": 114.70860098372293, 00:19:25.148 "min_latency_us": 34.26461538461538, 00:19:25.148 "max_latency_us": 1764.4307692307693 00:19:25.148 } 00:19:25.148 ], 00:19:25.148 "core_count": 1 00:19:25.148 } 00:19:25.148 23:01:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.148 23:01:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 60161 00:19:25.148 23:01:00 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 60161 ']' 00:19:25.148 23:01:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 60161 00:19:25.148 23:01:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:19:25.148 23:01:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.148 23:01:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60161 00:19:25.148 killing process with pid 60161 00:19:25.148 23:01:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:25.148 23:01:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:25.148 23:01:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60161' 00:19:25.148 23:01:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 60161 00:19:25.148 23:01:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 60161 00:19:25.148 [2024-12-09 23:01:00.471488] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:25.409 [2024-12-09 23:01:00.567786] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:26.352 23:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.P9JXYtGoR7 00:19:26.352 23:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:19:26.352 23:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:19:26.352 23:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.80 00:19:26.352 23:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:19:26.352 23:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:26.352 23:01:01 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:19:26.352 23:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.80 != \0\.\0\0 ]] 00:19:26.352 00:19:26.352 real 0m3.765s 00:19:26.352 user 0m4.440s 00:19:26.352 sys 0m0.475s 00:19:26.352 23:01:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:26.352 23:01:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.352 ************************************ 00:19:26.352 END TEST raid_write_error_test 00:19:26.352 ************************************ 00:19:26.352 23:01:01 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:19:26.352 23:01:01 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:19:26.352 23:01:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:26.352 23:01:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:26.352 23:01:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:26.352 ************************************ 00:19:26.352 START TEST raid_state_function_test 00:19:26.352 ************************************ 00:19:26.352 23:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:19:26.352 23:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:19:26.352 23:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:26.352 23:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:19:26.352 23:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:26.352 23:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:26.352 23:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:19:26.352 23:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:26.352 23:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:26.352 23:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:26.352 23:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:26.352 23:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:26.352 23:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:26.352 23:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:26.352 23:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:26.352 23:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:26.352 23:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:26.352 23:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:26.352 23:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:26.352 23:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:19:26.352 23:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:19:26.352 23:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:26.352 Process raid pid: 60293 00:19:26.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:26.352 23:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:19:26.352 23:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:19:26.352 23:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60293 00:19:26.352 23:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60293' 00:19:26.352 23:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60293 00:19:26.352 23:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60293 ']' 00:19:26.352 23:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.352 23:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.352 23:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.352 23:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.352 23:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:26.352 23:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.352 [2024-12-09 23:01:01.586019] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:19:26.352 [2024-12-09 23:01:01.586467] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:26.626 [2024-12-09 23:01:01.749470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.626 [2024-12-09 23:01:01.894710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.893 [2024-12-09 23:01:02.067603] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:26.893 [2024-12-09 23:01:02.067867] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:27.181 23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.181 23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:19:27.181 23:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:27.181 23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.181 23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.181 [2024-12-09 23:01:02.473319] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:27.181 [2024-12-09 23:01:02.473543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:27.181 [2024-12-09 23:01:02.473566] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:27.181 [2024-12-09 23:01:02.473577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:27.181 23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.181 23:01:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:19:27.181 23:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:27.181 23:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:27.181 23:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:27.181 23:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:27.181 23:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:27.181 23:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.181 23:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.181 23:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.181 23:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.181 23:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:27.181 23:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.181 23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.181 23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.181 23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.181 23:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.181 "name": "Existed_Raid", 00:19:27.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.181 "strip_size_kb": 64, 00:19:27.181 "state": "configuring", 00:19:27.181 
"raid_level": "concat", 00:19:27.181 "superblock": false, 00:19:27.181 "num_base_bdevs": 2, 00:19:27.181 "num_base_bdevs_discovered": 0, 00:19:27.181 "num_base_bdevs_operational": 2, 00:19:27.181 "base_bdevs_list": [ 00:19:27.181 { 00:19:27.181 "name": "BaseBdev1", 00:19:27.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.181 "is_configured": false, 00:19:27.181 "data_offset": 0, 00:19:27.181 "data_size": 0 00:19:27.181 }, 00:19:27.181 { 00:19:27.181 "name": "BaseBdev2", 00:19:27.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.181 "is_configured": false, 00:19:27.181 "data_offset": 0, 00:19:27.181 "data_size": 0 00:19:27.181 } 00:19:27.181 ] 00:19:27.181 }' 00:19:27.181 23:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.181 23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.503 23:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:27.503 23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.503 23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.503 [2024-12-09 23:01:02.817439] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:27.503 [2024-12-09 23:01:02.817484] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:27.503 23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.503 23:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:27.503 23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.504 23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:19:27.504 [2024-12-09 23:01:02.829451] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:27.504 [2024-12-09 23:01:02.829645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:27.504 [2024-12-09 23:01:02.829715] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:27.504 [2024-12-09 23:01:02.829750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:27.504 23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.504 23:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:27.504 23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.504 23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.767 [2024-12-09 23:01:02.872794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:27.767 BaseBdev1 00:19:27.767 23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.767 23:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:27.767 23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:27.767 23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:27.767 23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:27.767 23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:27.767 23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:27.767 23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:19:27.767 23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.767 23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.767 23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.767 23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:27.767 23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.767 23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.767 [ 00:19:27.767 { 00:19:27.767 "name": "BaseBdev1", 00:19:27.767 "aliases": [ 00:19:27.767 "d81ba035-293c-4f70-bb54-eeba43315130" 00:19:27.767 ], 00:19:27.767 "product_name": "Malloc disk", 00:19:27.767 "block_size": 512, 00:19:27.767 "num_blocks": 65536, 00:19:27.767 "uuid": "d81ba035-293c-4f70-bb54-eeba43315130", 00:19:27.767 "assigned_rate_limits": { 00:19:27.767 "rw_ios_per_sec": 0, 00:19:27.767 "rw_mbytes_per_sec": 0, 00:19:27.767 "r_mbytes_per_sec": 0, 00:19:27.767 "w_mbytes_per_sec": 0 00:19:27.767 }, 00:19:27.767 "claimed": true, 00:19:27.767 "claim_type": "exclusive_write", 00:19:27.767 "zoned": false, 00:19:27.767 "supported_io_types": { 00:19:27.767 "read": true, 00:19:27.767 "write": true, 00:19:27.767 "unmap": true, 00:19:27.767 "flush": true, 00:19:27.767 "reset": true, 00:19:27.767 "nvme_admin": false, 00:19:27.767 "nvme_io": false, 00:19:27.767 "nvme_io_md": false, 00:19:27.767 "write_zeroes": true, 00:19:27.767 "zcopy": true, 00:19:27.767 "get_zone_info": false, 00:19:27.767 "zone_management": false, 00:19:27.767 "zone_append": false, 00:19:27.767 "compare": false, 00:19:27.767 "compare_and_write": false, 00:19:27.767 "abort": true, 00:19:27.767 "seek_hole": false, 00:19:27.767 "seek_data": false, 00:19:27.767 "copy": true, 00:19:27.767 "nvme_iov_md": 
false 00:19:27.767 }, 00:19:27.767 "memory_domains": [ 00:19:27.767 { 00:19:27.767 "dma_device_id": "system", 00:19:27.767 "dma_device_type": 1 00:19:27.767 }, 00:19:27.767 { 00:19:27.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.767 "dma_device_type": 2 00:19:27.767 } 00:19:27.767 ], 00:19:27.767 "driver_specific": {} 00:19:27.767 } 00:19:27.767 ] 00:19:27.767 23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.767 23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:27.767 23:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:19:27.767 23:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:27.767 23:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:27.767 23:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:27.767 23:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:27.767 23:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:27.767 23:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.767 23:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.767 23:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.767 23:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.767 23:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:27.767 23:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.768 
23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.768 23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.768 23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.768 23:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.768 "name": "Existed_Raid", 00:19:27.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.768 "strip_size_kb": 64, 00:19:27.768 "state": "configuring", 00:19:27.768 "raid_level": "concat", 00:19:27.768 "superblock": false, 00:19:27.768 "num_base_bdevs": 2, 00:19:27.768 "num_base_bdevs_discovered": 1, 00:19:27.768 "num_base_bdevs_operational": 2, 00:19:27.768 "base_bdevs_list": [ 00:19:27.768 { 00:19:27.768 "name": "BaseBdev1", 00:19:27.768 "uuid": "d81ba035-293c-4f70-bb54-eeba43315130", 00:19:27.768 "is_configured": true, 00:19:27.768 "data_offset": 0, 00:19:27.768 "data_size": 65536 00:19:27.768 }, 00:19:27.768 { 00:19:27.768 "name": "BaseBdev2", 00:19:27.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.768 "is_configured": false, 00:19:27.768 "data_offset": 0, 00:19:27.768 "data_size": 0 00:19:27.768 } 00:19:27.768 ] 00:19:27.768 }' 00:19:27.768 23:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.768 23:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.028 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:28.028 23:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.028 23:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.028 [2024-12-09 23:01:03.236931] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:28.028 [2024-12-09 23:01:03.237009] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:28.028 23:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.028 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:28.028 23:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.028 23:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.028 [2024-12-09 23:01:03.245010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:28.028 [2024-12-09 23:01:03.247402] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:28.028 [2024-12-09 23:01:03.247465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:28.028 23:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.028 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:28.028 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:28.028 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:19:28.028 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:28.028 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:28.028 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:28.028 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:28.028 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:19:28.028 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:28.028 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:28.028 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:28.028 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:28.028 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.028 23:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.028 23:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.028 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:28.028 23:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.028 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.028 "name": "Existed_Raid", 00:19:28.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.028 "strip_size_kb": 64, 00:19:28.028 "state": "configuring", 00:19:28.028 "raid_level": "concat", 00:19:28.028 "superblock": false, 00:19:28.028 "num_base_bdevs": 2, 00:19:28.028 "num_base_bdevs_discovered": 1, 00:19:28.028 "num_base_bdevs_operational": 2, 00:19:28.028 "base_bdevs_list": [ 00:19:28.028 { 00:19:28.028 "name": "BaseBdev1", 00:19:28.028 "uuid": "d81ba035-293c-4f70-bb54-eeba43315130", 00:19:28.028 "is_configured": true, 00:19:28.028 "data_offset": 0, 00:19:28.028 "data_size": 65536 00:19:28.028 }, 00:19:28.028 { 00:19:28.028 "name": "BaseBdev2", 00:19:28.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.028 "is_configured": false, 00:19:28.028 "data_offset": 0, 00:19:28.028 "data_size": 0 00:19:28.028 } 
00:19:28.028 ] 00:19:28.028 }' 00:19:28.028 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.028 23:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.288 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:28.288 23:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.288 23:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.288 [2024-12-09 23:01:03.613073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:28.288 [2024-12-09 23:01:03.613168] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:28.288 [2024-12-09 23:01:03.613178] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:19:28.288 [2024-12-09 23:01:03.613501] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:28.288 BaseBdev2 00:19:28.288 [2024-12-09 23:01:03.613683] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:28.288 [2024-12-09 23:01:03.613702] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:28.288 [2024-12-09 23:01:03.614015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:28.288 23:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.288 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:28.288 23:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:28.288 23:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:28.288 23:01:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:28.288 23:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:28.288 23:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:28.288 23:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:28.288 23:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.288 23:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.288 23:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.288 23:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:28.288 23:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.288 23:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.288 [ 00:19:28.288 { 00:19:28.288 "name": "BaseBdev2", 00:19:28.288 "aliases": [ 00:19:28.288 "5fd960af-cd5f-499a-9bf8-fed8afdf9ccd" 00:19:28.288 ], 00:19:28.288 "product_name": "Malloc disk", 00:19:28.288 "block_size": 512, 00:19:28.288 "num_blocks": 65536, 00:19:28.288 "uuid": "5fd960af-cd5f-499a-9bf8-fed8afdf9ccd", 00:19:28.288 "assigned_rate_limits": { 00:19:28.288 "rw_ios_per_sec": 0, 00:19:28.288 "rw_mbytes_per_sec": 0, 00:19:28.288 "r_mbytes_per_sec": 0, 00:19:28.288 "w_mbytes_per_sec": 0 00:19:28.288 }, 00:19:28.288 "claimed": true, 00:19:28.288 "claim_type": "exclusive_write", 00:19:28.288 "zoned": false, 00:19:28.288 "supported_io_types": { 00:19:28.288 "read": true, 00:19:28.288 "write": true, 00:19:28.288 "unmap": true, 00:19:28.288 "flush": true, 00:19:28.288 "reset": true, 00:19:28.288 "nvme_admin": false, 00:19:28.288 "nvme_io": false, 00:19:28.288 "nvme_io_md": 
false, 00:19:28.288 "write_zeroes": true, 00:19:28.288 "zcopy": true, 00:19:28.288 "get_zone_info": false, 00:19:28.288 "zone_management": false, 00:19:28.288 "zone_append": false, 00:19:28.288 "compare": false, 00:19:28.288 "compare_and_write": false, 00:19:28.288 "abort": true, 00:19:28.288 "seek_hole": false, 00:19:28.288 "seek_data": false, 00:19:28.288 "copy": true, 00:19:28.288 "nvme_iov_md": false 00:19:28.288 }, 00:19:28.288 "memory_domains": [ 00:19:28.288 { 00:19:28.288 "dma_device_id": "system", 00:19:28.288 "dma_device_type": 1 00:19:28.288 }, 00:19:28.288 { 00:19:28.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.288 "dma_device_type": 2 00:19:28.288 } 00:19:28.288 ], 00:19:28.288 "driver_specific": {} 00:19:28.288 } 00:19:28.288 ] 00:19:28.288 23:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.288 23:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:28.288 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:28.288 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:28.288 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:19:28.288 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:28.288 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:28.288 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:28.288 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:28.288 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:28.288 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:19:28.288 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:28.288 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:28.288 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:28.288 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.288 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:28.288 23:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.288 23:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.548 23:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.548 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.548 "name": "Existed_Raid", 00:19:28.548 "uuid": "c4241f1b-19a3-4472-93e0-2aa076b6a4a6", 00:19:28.548 "strip_size_kb": 64, 00:19:28.548 "state": "online", 00:19:28.548 "raid_level": "concat", 00:19:28.548 "superblock": false, 00:19:28.548 "num_base_bdevs": 2, 00:19:28.548 "num_base_bdevs_discovered": 2, 00:19:28.548 "num_base_bdevs_operational": 2, 00:19:28.548 "base_bdevs_list": [ 00:19:28.548 { 00:19:28.548 "name": "BaseBdev1", 00:19:28.548 "uuid": "d81ba035-293c-4f70-bb54-eeba43315130", 00:19:28.548 "is_configured": true, 00:19:28.548 "data_offset": 0, 00:19:28.548 "data_size": 65536 00:19:28.548 }, 00:19:28.548 { 00:19:28.548 "name": "BaseBdev2", 00:19:28.548 "uuid": "5fd960af-cd5f-499a-9bf8-fed8afdf9ccd", 00:19:28.548 "is_configured": true, 00:19:28.548 "data_offset": 0, 00:19:28.548 "data_size": 65536 00:19:28.548 } 00:19:28.548 ] 00:19:28.548 }' 00:19:28.548 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:19:28.548 23:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.808 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:28.808 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:28.808 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:28.808 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:28.808 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:28.808 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:28.808 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:28.808 23:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.808 23:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.808 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:28.808 [2024-12-09 23:01:03.957531] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:28.808 23:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.808 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:28.808 "name": "Existed_Raid", 00:19:28.808 "aliases": [ 00:19:28.808 "c4241f1b-19a3-4472-93e0-2aa076b6a4a6" 00:19:28.808 ], 00:19:28.808 "product_name": "Raid Volume", 00:19:28.808 "block_size": 512, 00:19:28.808 "num_blocks": 131072, 00:19:28.808 "uuid": "c4241f1b-19a3-4472-93e0-2aa076b6a4a6", 00:19:28.808 "assigned_rate_limits": { 00:19:28.808 "rw_ios_per_sec": 0, 00:19:28.808 "rw_mbytes_per_sec": 0, 00:19:28.808 "r_mbytes_per_sec": 
0, 00:19:28.808 "w_mbytes_per_sec": 0 00:19:28.808 }, 00:19:28.808 "claimed": false, 00:19:28.808 "zoned": false, 00:19:28.808 "supported_io_types": { 00:19:28.808 "read": true, 00:19:28.808 "write": true, 00:19:28.808 "unmap": true, 00:19:28.808 "flush": true, 00:19:28.808 "reset": true, 00:19:28.808 "nvme_admin": false, 00:19:28.808 "nvme_io": false, 00:19:28.808 "nvme_io_md": false, 00:19:28.808 "write_zeroes": true, 00:19:28.808 "zcopy": false, 00:19:28.808 "get_zone_info": false, 00:19:28.808 "zone_management": false, 00:19:28.808 "zone_append": false, 00:19:28.808 "compare": false, 00:19:28.808 "compare_and_write": false, 00:19:28.808 "abort": false, 00:19:28.808 "seek_hole": false, 00:19:28.808 "seek_data": false, 00:19:28.808 "copy": false, 00:19:28.808 "nvme_iov_md": false 00:19:28.808 }, 00:19:28.808 "memory_domains": [ 00:19:28.808 { 00:19:28.808 "dma_device_id": "system", 00:19:28.808 "dma_device_type": 1 00:19:28.808 }, 00:19:28.808 { 00:19:28.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.808 "dma_device_type": 2 00:19:28.808 }, 00:19:28.808 { 00:19:28.808 "dma_device_id": "system", 00:19:28.808 "dma_device_type": 1 00:19:28.808 }, 00:19:28.808 { 00:19:28.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.808 "dma_device_type": 2 00:19:28.808 } 00:19:28.808 ], 00:19:28.808 "driver_specific": { 00:19:28.808 "raid": { 00:19:28.808 "uuid": "c4241f1b-19a3-4472-93e0-2aa076b6a4a6", 00:19:28.808 "strip_size_kb": 64, 00:19:28.808 "state": "online", 00:19:28.808 "raid_level": "concat", 00:19:28.809 "superblock": false, 00:19:28.809 "num_base_bdevs": 2, 00:19:28.809 "num_base_bdevs_discovered": 2, 00:19:28.809 "num_base_bdevs_operational": 2, 00:19:28.809 "base_bdevs_list": [ 00:19:28.809 { 00:19:28.809 "name": "BaseBdev1", 00:19:28.809 "uuid": "d81ba035-293c-4f70-bb54-eeba43315130", 00:19:28.809 "is_configured": true, 00:19:28.809 "data_offset": 0, 00:19:28.809 "data_size": 65536 00:19:28.809 }, 00:19:28.809 { 00:19:28.809 "name": "BaseBdev2", 
00:19:28.809 "uuid": "5fd960af-cd5f-499a-9bf8-fed8afdf9ccd", 00:19:28.809 "is_configured": true, 00:19:28.809 "data_offset": 0, 00:19:28.809 "data_size": 65536 00:19:28.809 } 00:19:28.809 ] 00:19:28.809 } 00:19:28.809 } 00:19:28.809 }' 00:19:28.809 23:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:28.809 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:28.809 BaseBdev2' 00:19:28.809 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:28.809 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:28.809 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:28.809 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:28.809 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:28.809 23:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.809 23:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.809 23:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.809 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:28.809 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:28.809 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:28.809 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:19:28.809 23:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.809 23:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.809 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:28.809 23:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.809 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:28.809 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:28.809 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:28.809 23:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.809 23:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.809 [2024-12-09 23:01:04.113293] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:28.809 [2024-12-09 23:01:04.113456] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:28.809 [2024-12-09 23:01:04.113577] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:29.069 23:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.069 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:29.069 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:19:29.069 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:29.069 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:19:29.069 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:19:29.069 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:19:29.069 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:29.069 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:19:29.069 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:29.069 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:29.069 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:29.069 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.069 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.069 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.069 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.069 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.069 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.069 23:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.069 23:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.069 23:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.069 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.069 "name": "Existed_Raid", 00:19:29.069 "uuid": "c4241f1b-19a3-4472-93e0-2aa076b6a4a6", 00:19:29.069 "strip_size_kb": 64, 00:19:29.069 
"state": "offline", 00:19:29.069 "raid_level": "concat", 00:19:29.069 "superblock": false, 00:19:29.069 "num_base_bdevs": 2, 00:19:29.069 "num_base_bdevs_discovered": 1, 00:19:29.069 "num_base_bdevs_operational": 1, 00:19:29.069 "base_bdevs_list": [ 00:19:29.069 { 00:19:29.069 "name": null, 00:19:29.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.069 "is_configured": false, 00:19:29.069 "data_offset": 0, 00:19:29.069 "data_size": 65536 00:19:29.069 }, 00:19:29.069 { 00:19:29.069 "name": "BaseBdev2", 00:19:29.069 "uuid": "5fd960af-cd5f-499a-9bf8-fed8afdf9ccd", 00:19:29.069 "is_configured": true, 00:19:29.069 "data_offset": 0, 00:19:29.069 "data_size": 65536 00:19:29.069 } 00:19:29.069 ] 00:19:29.069 }' 00:19:29.069 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.069 23:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.331 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:29.331 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:29.331 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.331 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:29.331 23:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.331 23:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.331 23:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.331 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:29.331 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:29.331 23:01:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:29.331 23:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.331 23:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.331 [2024-12-09 23:01:04.560350] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:29.331 [2024-12-09 23:01:04.560412] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:29.331 23:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.331 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:29.331 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:29.331 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.331 23:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.331 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:29.331 23:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.331 23:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.331 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:29.331 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:29.331 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:29.331 23:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60293 00:19:29.331 23:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60293 ']' 00:19:29.331 23:01:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 60293 00:19:29.331 23:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:19:29.331 23:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:29.331 23:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60293 00:19:29.331 killing process with pid 60293 00:19:29.331 23:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:29.331 23:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:29.331 23:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60293' 00:19:29.331 23:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60293 00:19:29.331 23:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60293 00:19:29.331 [2024-12-09 23:01:04.686167] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:29.594 [2024-12-09 23:01:04.697816] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:30.167 23:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:19:30.167 00:19:30.167 real 0m3.995s 00:19:30.167 user 0m5.592s 00:19:30.167 sys 0m0.733s 00:19:30.167 23:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:30.167 ************************************ 00:19:30.167 END TEST raid_state_function_test 00:19:30.167 ************************************ 00:19:30.167 23:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.428 23:01:05 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:19:30.428 23:01:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:19:30.428 23:01:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:30.428 23:01:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:30.428 ************************************ 00:19:30.428 START TEST raid_state_function_test_sb 00:19:30.428 ************************************ 00:19:30.428 23:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:19:30.428 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:19:30.428 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:30.428 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:30.428 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:30.428 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:30.428 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:30.428 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:30.428 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:30.428 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:30.428 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:30.428 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:30.428 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:30.428 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:30.428 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:19:30.428 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:30.428 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:30.428 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:30.428 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:30.428 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:19:30.428 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:19:30.428 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:30.428 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:30.429 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:30.429 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60537 00:19:30.429 Process raid pid: 60537 00:19:30.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:30.429 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60537' 00:19:30.429 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60537 00:19:30.429 23:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60537 ']' 00:19:30.429 23:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.429 23:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:30.429 23:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.429 23:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:30.429 23:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:30.429 23:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.429 [2024-12-09 23:01:05.665470] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:19:30.429 [2024-12-09 23:01:05.665896] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.689 [2024-12-09 23:01:05.830787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.689 [2024-12-09 23:01:05.986249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.948 [2024-12-09 23:01:06.159427] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:30.948 [2024-12-09 23:01:06.159485] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:31.520 23:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:31.520 23:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:19:31.520 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:31.520 23:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.520 23:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.520 [2024-12-09 23:01:06.645438] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:31.520 [2024-12-09 23:01:06.645510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:31.520 [2024-12-09 23:01:06.645523] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:31.520 [2024-12-09 23:01:06.645533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:31.520 23:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:31.520 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:19:31.520 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:31.520 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:31.520 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:31.520 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:31.520 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:31.520 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.520 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.520 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.520 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.520 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.520 23:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.520 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.520 23:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.520 23:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.520 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.520 "name": "Existed_Raid", 00:19:31.520 "uuid": "9019756a-4090-4cd9-a1cd-1e6e15af8f50", 00:19:31.520 
"strip_size_kb": 64, 00:19:31.520 "state": "configuring", 00:19:31.520 "raid_level": "concat", 00:19:31.520 "superblock": true, 00:19:31.520 "num_base_bdevs": 2, 00:19:31.520 "num_base_bdevs_discovered": 0, 00:19:31.520 "num_base_bdevs_operational": 2, 00:19:31.520 "base_bdevs_list": [ 00:19:31.520 { 00:19:31.520 "name": "BaseBdev1", 00:19:31.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.520 "is_configured": false, 00:19:31.520 "data_offset": 0, 00:19:31.520 "data_size": 0 00:19:31.520 }, 00:19:31.520 { 00:19:31.520 "name": "BaseBdev2", 00:19:31.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.520 "is_configured": false, 00:19:31.520 "data_offset": 0, 00:19:31.520 "data_size": 0 00:19:31.520 } 00:19:31.520 ] 00:19:31.520 }' 00:19:31.520 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.520 23:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.839 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:31.839 23:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.839 23:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.839 [2024-12-09 23:01:06.973441] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:31.839 [2024-12-09 23:01:06.973494] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:31.839 23:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.839 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:31.839 23:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:31.839 23:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.839 [2024-12-09 23:01:06.981442] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:31.839 [2024-12-09 23:01:06.981501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:31.839 [2024-12-09 23:01:06.981512] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:31.839 [2024-12-09 23:01:06.981526] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:31.839 23:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.839 23:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:31.839 23:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.839 23:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.839 [2024-12-09 23:01:07.020523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:31.839 BaseBdev1 00:19:31.839 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.839 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:31.839 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:31.839 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:31.839 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:31.839 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:31.839 23:01:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:31.839 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:31.839 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.839 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.839 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.839 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:31.839 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.839 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.839 [ 00:19:31.839 { 00:19:31.839 "name": "BaseBdev1", 00:19:31.839 "aliases": [ 00:19:31.839 "a0747043-6b3f-41d7-9432-0ac4e6e8ae32" 00:19:31.839 ], 00:19:31.839 "product_name": "Malloc disk", 00:19:31.839 "block_size": 512, 00:19:31.839 "num_blocks": 65536, 00:19:31.839 "uuid": "a0747043-6b3f-41d7-9432-0ac4e6e8ae32", 00:19:31.839 "assigned_rate_limits": { 00:19:31.839 "rw_ios_per_sec": 0, 00:19:31.840 "rw_mbytes_per_sec": 0, 00:19:31.840 "r_mbytes_per_sec": 0, 00:19:31.840 "w_mbytes_per_sec": 0 00:19:31.840 }, 00:19:31.840 "claimed": true, 00:19:31.840 "claim_type": "exclusive_write", 00:19:31.840 "zoned": false, 00:19:31.840 "supported_io_types": { 00:19:31.840 "read": true, 00:19:31.840 "write": true, 00:19:31.840 "unmap": true, 00:19:31.840 "flush": true, 00:19:31.840 "reset": true, 00:19:31.840 "nvme_admin": false, 00:19:31.840 "nvme_io": false, 00:19:31.840 "nvme_io_md": false, 00:19:31.840 "write_zeroes": true, 00:19:31.840 "zcopy": true, 00:19:31.840 "get_zone_info": false, 00:19:31.840 "zone_management": false, 00:19:31.840 "zone_append": false, 00:19:31.840 "compare": false, 00:19:31.840 
"compare_and_write": false, 00:19:31.840 "abort": true, 00:19:31.840 "seek_hole": false, 00:19:31.840 "seek_data": false, 00:19:31.840 "copy": true, 00:19:31.840 "nvme_iov_md": false 00:19:31.840 }, 00:19:31.840 "memory_domains": [ 00:19:31.840 { 00:19:31.840 "dma_device_id": "system", 00:19:31.840 "dma_device_type": 1 00:19:31.840 }, 00:19:31.840 { 00:19:31.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.840 "dma_device_type": 2 00:19:31.840 } 00:19:31.840 ], 00:19:31.840 "driver_specific": {} 00:19:31.840 } 00:19:31.840 ] 00:19:31.840 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.840 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:31.840 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:19:31.840 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:31.840 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:31.840 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:31.840 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:31.840 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:31.840 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.840 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.840 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.840 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.840 23:01:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.840 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.840 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.840 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.840 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.840 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.840 "name": "Existed_Raid", 00:19:31.840 "uuid": "c836df3c-1674-40a5-b81e-21cd415a81e3", 00:19:31.840 "strip_size_kb": 64, 00:19:31.840 "state": "configuring", 00:19:31.840 "raid_level": "concat", 00:19:31.840 "superblock": true, 00:19:31.840 "num_base_bdevs": 2, 00:19:31.840 "num_base_bdevs_discovered": 1, 00:19:31.840 "num_base_bdevs_operational": 2, 00:19:31.840 "base_bdevs_list": [ 00:19:31.840 { 00:19:31.840 "name": "BaseBdev1", 00:19:31.840 "uuid": "a0747043-6b3f-41d7-9432-0ac4e6e8ae32", 00:19:31.840 "is_configured": true, 00:19:31.840 "data_offset": 2048, 00:19:31.840 "data_size": 63488 00:19:31.840 }, 00:19:31.840 { 00:19:31.840 "name": "BaseBdev2", 00:19:31.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.840 "is_configured": false, 00:19:31.840 "data_offset": 0, 00:19:31.840 "data_size": 0 00:19:31.840 } 00:19:31.840 ] 00:19:31.840 }' 00:19:31.840 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.840 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.134 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:32.134 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:32.134 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.134 [2024-12-09 23:01:07.364650] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:32.134 [2024-12-09 23:01:07.364723] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:32.134 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.134 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:32.134 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.134 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.134 [2024-12-09 23:01:07.372722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:32.134 [2024-12-09 23:01:07.374958] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:32.135 [2024-12-09 23:01:07.375018] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:32.135 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.135 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:32.135 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:32.135 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:19:32.135 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:32.135 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:19:32.135 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:32.135 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:32.135 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:32.135 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.135 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.135 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.135 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.135 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:32.135 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.135 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.135 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.135 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.135 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.135 "name": "Existed_Raid", 00:19:32.135 "uuid": "a5ed66f7-9594-4a17-b6e0-08d48b2e3e7e", 00:19:32.135 "strip_size_kb": 64, 00:19:32.135 "state": "configuring", 00:19:32.135 "raid_level": "concat", 00:19:32.135 "superblock": true, 00:19:32.135 "num_base_bdevs": 2, 00:19:32.135 "num_base_bdevs_discovered": 1, 00:19:32.135 "num_base_bdevs_operational": 2, 00:19:32.135 "base_bdevs_list": [ 00:19:32.135 { 00:19:32.135 "name": "BaseBdev1", 00:19:32.135 "uuid": 
"a0747043-6b3f-41d7-9432-0ac4e6e8ae32", 00:19:32.135 "is_configured": true, 00:19:32.135 "data_offset": 2048, 00:19:32.135 "data_size": 63488 00:19:32.135 }, 00:19:32.135 { 00:19:32.135 "name": "BaseBdev2", 00:19:32.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.135 "is_configured": false, 00:19:32.135 "data_offset": 0, 00:19:32.135 "data_size": 0 00:19:32.135 } 00:19:32.135 ] 00:19:32.135 }' 00:19:32.135 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.135 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.396 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:32.396 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.396 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.396 [2024-12-09 23:01:07.757087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:32.396 [2024-12-09 23:01:07.757386] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:32.396 [2024-12-09 23:01:07.757402] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:32.657 [2024-12-09 23:01:07.757700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:32.657 [2024-12-09 23:01:07.757873] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:32.657 [2024-12-09 23:01:07.757887] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:32.657 [2024-12-09 23:01:07.758031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:32.657 BaseBdev2 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.657 [ 00:19:32.657 { 00:19:32.657 "name": "BaseBdev2", 00:19:32.657 "aliases": [ 00:19:32.657 "b6633eb6-33ae-4fec-8314-3b6884962940" 00:19:32.657 ], 00:19:32.657 "product_name": "Malloc disk", 00:19:32.657 "block_size": 512, 00:19:32.657 "num_blocks": 65536, 00:19:32.657 "uuid": "b6633eb6-33ae-4fec-8314-3b6884962940", 00:19:32.657 "assigned_rate_limits": { 00:19:32.657 "rw_ios_per_sec": 0, 00:19:32.657 "rw_mbytes_per_sec": 0, 00:19:32.657 "r_mbytes_per_sec": 0, 
00:19:32.657 "w_mbytes_per_sec": 0 00:19:32.657 }, 00:19:32.657 "claimed": true, 00:19:32.657 "claim_type": "exclusive_write", 00:19:32.657 "zoned": false, 00:19:32.657 "supported_io_types": { 00:19:32.657 "read": true, 00:19:32.657 "write": true, 00:19:32.657 "unmap": true, 00:19:32.657 "flush": true, 00:19:32.657 "reset": true, 00:19:32.657 "nvme_admin": false, 00:19:32.657 "nvme_io": false, 00:19:32.657 "nvme_io_md": false, 00:19:32.657 "write_zeroes": true, 00:19:32.657 "zcopy": true, 00:19:32.657 "get_zone_info": false, 00:19:32.657 "zone_management": false, 00:19:32.657 "zone_append": false, 00:19:32.657 "compare": false, 00:19:32.657 "compare_and_write": false, 00:19:32.657 "abort": true, 00:19:32.657 "seek_hole": false, 00:19:32.657 "seek_data": false, 00:19:32.657 "copy": true, 00:19:32.657 "nvme_iov_md": false 00:19:32.657 }, 00:19:32.657 "memory_domains": [ 00:19:32.657 { 00:19:32.657 "dma_device_id": "system", 00:19:32.657 "dma_device_type": 1 00:19:32.657 }, 00:19:32.657 { 00:19:32.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.657 "dma_device_type": 2 00:19:32.657 } 00:19:32.657 ], 00:19:32.657 "driver_specific": {} 00:19:32.657 } 00:19:32.657 ] 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.657 "name": "Existed_Raid", 00:19:32.657 "uuid": "a5ed66f7-9594-4a17-b6e0-08d48b2e3e7e", 00:19:32.657 "strip_size_kb": 64, 00:19:32.657 "state": "online", 00:19:32.657 "raid_level": "concat", 00:19:32.657 "superblock": true, 00:19:32.657 "num_base_bdevs": 2, 00:19:32.657 "num_base_bdevs_discovered": 2, 00:19:32.657 "num_base_bdevs_operational": 2, 00:19:32.657 "base_bdevs_list": [ 00:19:32.657 { 00:19:32.657 "name": "BaseBdev1", 00:19:32.657 "uuid": 
"a0747043-6b3f-41d7-9432-0ac4e6e8ae32", 00:19:32.657 "is_configured": true, 00:19:32.657 "data_offset": 2048, 00:19:32.657 "data_size": 63488 00:19:32.657 }, 00:19:32.657 { 00:19:32.657 "name": "BaseBdev2", 00:19:32.657 "uuid": "b6633eb6-33ae-4fec-8314-3b6884962940", 00:19:32.657 "is_configured": true, 00:19:32.657 "data_offset": 2048, 00:19:32.657 "data_size": 63488 00:19:32.657 } 00:19:32.657 ] 00:19:32.657 }' 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.657 23:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.918 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:32.918 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:32.918 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:32.918 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:32.918 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:32.918 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:32.918 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:32.918 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:32.918 23:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.918 23:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.918 [2024-12-09 23:01:08.097563] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:32.918 23:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:19:32.918 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:32.918 "name": "Existed_Raid", 00:19:32.918 "aliases": [ 00:19:32.918 "a5ed66f7-9594-4a17-b6e0-08d48b2e3e7e" 00:19:32.918 ], 00:19:32.918 "product_name": "Raid Volume", 00:19:32.918 "block_size": 512, 00:19:32.918 "num_blocks": 126976, 00:19:32.918 "uuid": "a5ed66f7-9594-4a17-b6e0-08d48b2e3e7e", 00:19:32.918 "assigned_rate_limits": { 00:19:32.918 "rw_ios_per_sec": 0, 00:19:32.918 "rw_mbytes_per_sec": 0, 00:19:32.918 "r_mbytes_per_sec": 0, 00:19:32.918 "w_mbytes_per_sec": 0 00:19:32.918 }, 00:19:32.918 "claimed": false, 00:19:32.918 "zoned": false, 00:19:32.918 "supported_io_types": { 00:19:32.918 "read": true, 00:19:32.918 "write": true, 00:19:32.918 "unmap": true, 00:19:32.918 "flush": true, 00:19:32.918 "reset": true, 00:19:32.918 "nvme_admin": false, 00:19:32.918 "nvme_io": false, 00:19:32.918 "nvme_io_md": false, 00:19:32.918 "write_zeroes": true, 00:19:32.918 "zcopy": false, 00:19:32.918 "get_zone_info": false, 00:19:32.918 "zone_management": false, 00:19:32.918 "zone_append": false, 00:19:32.918 "compare": false, 00:19:32.918 "compare_and_write": false, 00:19:32.918 "abort": false, 00:19:32.918 "seek_hole": false, 00:19:32.918 "seek_data": false, 00:19:32.918 "copy": false, 00:19:32.918 "nvme_iov_md": false 00:19:32.918 }, 00:19:32.918 "memory_domains": [ 00:19:32.918 { 00:19:32.918 "dma_device_id": "system", 00:19:32.918 "dma_device_type": 1 00:19:32.918 }, 00:19:32.918 { 00:19:32.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.918 "dma_device_type": 2 00:19:32.918 }, 00:19:32.918 { 00:19:32.918 "dma_device_id": "system", 00:19:32.918 "dma_device_type": 1 00:19:32.918 }, 00:19:32.918 { 00:19:32.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.918 "dma_device_type": 2 00:19:32.918 } 00:19:32.918 ], 00:19:32.918 "driver_specific": { 00:19:32.918 "raid": { 00:19:32.918 "uuid": "a5ed66f7-9594-4a17-b6e0-08d48b2e3e7e", 00:19:32.918 
"strip_size_kb": 64, 00:19:32.918 "state": "online", 00:19:32.918 "raid_level": "concat", 00:19:32.918 "superblock": true, 00:19:32.918 "num_base_bdevs": 2, 00:19:32.918 "num_base_bdevs_discovered": 2, 00:19:32.918 "num_base_bdevs_operational": 2, 00:19:32.918 "base_bdevs_list": [ 00:19:32.918 { 00:19:32.918 "name": "BaseBdev1", 00:19:32.918 "uuid": "a0747043-6b3f-41d7-9432-0ac4e6e8ae32", 00:19:32.918 "is_configured": true, 00:19:32.918 "data_offset": 2048, 00:19:32.918 "data_size": 63488 00:19:32.918 }, 00:19:32.918 { 00:19:32.918 "name": "BaseBdev2", 00:19:32.918 "uuid": "b6633eb6-33ae-4fec-8314-3b6884962940", 00:19:32.918 "is_configured": true, 00:19:32.918 "data_offset": 2048, 00:19:32.918 "data_size": 63488 00:19:32.918 } 00:19:32.918 ] 00:19:32.918 } 00:19:32.918 } 00:19:32.918 }' 00:19:32.918 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:32.918 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:32.918 BaseBdev2' 00:19:32.918 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:32.918 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:32.918 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:32.918 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:32.918 23:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.918 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:32.918 23:01:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:32.918 23:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.918 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:32.918 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:32.918 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:32.918 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:32.918 23:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.918 23:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.918 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:32.918 23:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.918 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:32.918 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:32.918 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:32.918 23:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.918 23:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.918 [2024-12-09 23:01:08.277341] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:32.918 [2024-12-09 23:01:08.277388] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:32.918 [2024-12-09 23:01:08.277445] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:33.180 23:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.180 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:33.180 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:19:33.180 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:33.180 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:19:33.180 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:19:33.180 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:19:33.180 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:33.180 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:19:33.180 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:33.180 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:33.180 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:33.180 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:33.180 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:33.180 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:33.180 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:33.180 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:19:33.180 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:33.180 23:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.180 23:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.180 23:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.180 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:33.180 "name": "Existed_Raid", 00:19:33.180 "uuid": "a5ed66f7-9594-4a17-b6e0-08d48b2e3e7e", 00:19:33.180 "strip_size_kb": 64, 00:19:33.180 "state": "offline", 00:19:33.180 "raid_level": "concat", 00:19:33.180 "superblock": true, 00:19:33.180 "num_base_bdevs": 2, 00:19:33.180 "num_base_bdevs_discovered": 1, 00:19:33.180 "num_base_bdevs_operational": 1, 00:19:33.180 "base_bdevs_list": [ 00:19:33.180 { 00:19:33.180 "name": null, 00:19:33.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.180 "is_configured": false, 00:19:33.180 "data_offset": 0, 00:19:33.180 "data_size": 63488 00:19:33.180 }, 00:19:33.180 { 00:19:33.180 "name": "BaseBdev2", 00:19:33.180 "uuid": "b6633eb6-33ae-4fec-8314-3b6884962940", 00:19:33.180 "is_configured": true, 00:19:33.180 "data_offset": 2048, 00:19:33.180 "data_size": 63488 00:19:33.180 } 00:19:33.180 ] 00:19:33.180 }' 00:19:33.180 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:33.180 23:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.442 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:33.442 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:33.442 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 
00:19:33.442 23:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.442 23:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.442 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:33.442 23:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.442 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:33.442 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:33.442 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:33.442 23:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.442 23:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.442 [2024-12-09 23:01:08.703802] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:33.442 [2024-12-09 23:01:08.703877] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:33.442 23:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.442 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:33.442 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:33.442 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:33.442 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.442 23:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.442 23:01:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.442 23:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.703 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:33.703 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:33.703 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:33.703 23:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60537 00:19:33.703 23:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60537 ']' 00:19:33.703 23:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60537 00:19:33.703 23:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:19:33.703 23:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:33.703 23:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60537 00:19:33.703 killing process with pid 60537 00:19:33.703 23:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:33.703 23:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:33.703 23:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60537' 00:19:33.703 23:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60537 00:19:33.703 23:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60537 00:19:33.703 [2024-12-09 23:01:08.841820] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:33.703 [2024-12-09 23:01:08.853844] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:34.646 23:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:19:34.646 00:19:34.646 real 0m4.095s 00:19:34.646 user 0m5.788s 00:19:34.646 sys 0m0.716s 00:19:34.646 ************************************ 00:19:34.646 END TEST raid_state_function_test_sb 00:19:34.646 23:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:34.646 23:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.646 ************************************ 00:19:34.646 23:01:09 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:19:34.646 23:01:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:34.646 23:01:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:34.646 23:01:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:34.646 ************************************ 00:19:34.646 START TEST raid_superblock_test 00:19:34.646 ************************************ 00:19:34.646 23:01:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:19:34.646 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:19:34.646 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:34.646 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:34.646 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:34.646 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:34.646 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:34.646 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:34.646 
23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:34.646 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:34.646 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:34.647 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:34.647 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:34.647 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:34.647 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:19:34.647 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:19:34.647 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:19:34.647 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=60780 00:19:34.647 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 60780 00:19:34.647 23:01:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60780 ']' 00:19:34.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.647 23:01:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.647 23:01:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:34.647 23:01:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:34.647 23:01:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:34.647 23:01:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.647 23:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:34.647 [2024-12-09 23:01:09.833676] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:19:34.647 [2024-12-09 23:01:09.833842] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60780 ] 00:19:34.647 [2024-12-09 23:01:09.998766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.909 [2024-12-09 23:01:10.146142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.169 [2024-12-09 23:01:10.312384] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:35.169 [2024-12-09 23:01:10.312470] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:35.429 23:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:35.429 23:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:19:35.429 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:35.429 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:35.429 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:35.429 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:35.429 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:35.430 
23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:35.430 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:35.430 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:35.430 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:19:35.430 23:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.430 23:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.430 malloc1 00:19:35.430 23:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.430 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:35.430 23:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.430 23:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.430 [2024-12-09 23:01:10.760133] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:35.430 [2024-12-09 23:01:10.760211] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.430 [2024-12-09 23:01:10.760238] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:35.430 [2024-12-09 23:01:10.760250] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.430 [2024-12-09 23:01:10.762825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.430 [2024-12-09 23:01:10.762879] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:35.430 pt1 00:19:35.430 23:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:35.430 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:35.430 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:35.430 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:35.430 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:35.430 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:35.430 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:35.430 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:35.430 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:35.430 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:19:35.430 23:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.430 23:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.691 malloc2 00:19:35.691 23:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.691 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:35.691 23:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.691 23:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.691 [2024-12-09 23:01:10.801019] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:35.691 [2024-12-09 23:01:10.801088] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.691 [2024-12-09 
23:01:10.801134] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:35.691 [2024-12-09 23:01:10.801144] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.691 [2024-12-09 23:01:10.803685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.691 [2024-12-09 23:01:10.803735] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:35.691 pt2 00:19:35.691 23:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.691 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:35.691 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:35.691 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:35.691 23:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.691 23:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.691 [2024-12-09 23:01:10.809137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:35.691 [2024-12-09 23:01:10.811323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:35.691 [2024-12-09 23:01:10.811533] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:35.691 [2024-12-09 23:01:10.811549] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:35.691 [2024-12-09 23:01:10.811864] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:35.691 [2024-12-09 23:01:10.812040] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:35.691 [2024-12-09 23:01:10.812052] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:35.691 [2024-12-09 23:01:10.812268] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:35.691 23:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.691 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:19:35.691 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:35.691 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:35.691 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:35.691 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:35.691 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:35.691 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.691 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.691 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.691 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.691 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.691 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.691 23:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.691 23:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.691 23:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.691 23:01:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.691 "name": "raid_bdev1", 00:19:35.691 "uuid": "603ccb99-af2b-4b44-afd6-5537a5145368", 00:19:35.691 "strip_size_kb": 64, 00:19:35.691 "state": "online", 00:19:35.691 "raid_level": "concat", 00:19:35.691 "superblock": true, 00:19:35.691 "num_base_bdevs": 2, 00:19:35.691 "num_base_bdevs_discovered": 2, 00:19:35.691 "num_base_bdevs_operational": 2, 00:19:35.691 "base_bdevs_list": [ 00:19:35.691 { 00:19:35.691 "name": "pt1", 00:19:35.691 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:35.691 "is_configured": true, 00:19:35.691 "data_offset": 2048, 00:19:35.691 "data_size": 63488 00:19:35.691 }, 00:19:35.691 { 00:19:35.691 "name": "pt2", 00:19:35.691 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:35.691 "is_configured": true, 00:19:35.691 "data_offset": 2048, 00:19:35.691 "data_size": 63488 00:19:35.691 } 00:19:35.691 ] 00:19:35.691 }' 00:19:35.691 23:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.691 23:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.953 [2024-12-09 23:01:11.137466] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:35.953 "name": "raid_bdev1", 00:19:35.953 "aliases": [ 00:19:35.953 "603ccb99-af2b-4b44-afd6-5537a5145368" 00:19:35.953 ], 00:19:35.953 "product_name": "Raid Volume", 00:19:35.953 "block_size": 512, 00:19:35.953 "num_blocks": 126976, 00:19:35.953 "uuid": "603ccb99-af2b-4b44-afd6-5537a5145368", 00:19:35.953 "assigned_rate_limits": { 00:19:35.953 "rw_ios_per_sec": 0, 00:19:35.953 "rw_mbytes_per_sec": 0, 00:19:35.953 "r_mbytes_per_sec": 0, 00:19:35.953 "w_mbytes_per_sec": 0 00:19:35.953 }, 00:19:35.953 "claimed": false, 00:19:35.953 "zoned": false, 00:19:35.953 "supported_io_types": { 00:19:35.953 "read": true, 00:19:35.953 "write": true, 00:19:35.953 "unmap": true, 00:19:35.953 "flush": true, 00:19:35.953 "reset": true, 00:19:35.953 "nvme_admin": false, 00:19:35.953 "nvme_io": false, 00:19:35.953 "nvme_io_md": false, 00:19:35.953 "write_zeroes": true, 00:19:35.953 "zcopy": false, 00:19:35.953 "get_zone_info": false, 00:19:35.953 "zone_management": false, 00:19:35.953 "zone_append": false, 00:19:35.953 "compare": false, 00:19:35.953 "compare_and_write": false, 00:19:35.953 "abort": false, 00:19:35.953 "seek_hole": false, 00:19:35.953 "seek_data": false, 00:19:35.953 "copy": false, 00:19:35.953 "nvme_iov_md": false 00:19:35.953 }, 00:19:35.953 "memory_domains": [ 00:19:35.953 { 00:19:35.953 "dma_device_id": "system", 00:19:35.953 "dma_device_type": 1 00:19:35.953 }, 00:19:35.953 { 00:19:35.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.953 
"dma_device_type": 2 00:19:35.953 }, 00:19:35.953 { 00:19:35.953 "dma_device_id": "system", 00:19:35.953 "dma_device_type": 1 00:19:35.953 }, 00:19:35.953 { 00:19:35.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.953 "dma_device_type": 2 00:19:35.953 } 00:19:35.953 ], 00:19:35.953 "driver_specific": { 00:19:35.953 "raid": { 00:19:35.953 "uuid": "603ccb99-af2b-4b44-afd6-5537a5145368", 00:19:35.953 "strip_size_kb": 64, 00:19:35.953 "state": "online", 00:19:35.953 "raid_level": "concat", 00:19:35.953 "superblock": true, 00:19:35.953 "num_base_bdevs": 2, 00:19:35.953 "num_base_bdevs_discovered": 2, 00:19:35.953 "num_base_bdevs_operational": 2, 00:19:35.953 "base_bdevs_list": [ 00:19:35.953 { 00:19:35.953 "name": "pt1", 00:19:35.953 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:35.953 "is_configured": true, 00:19:35.953 "data_offset": 2048, 00:19:35.953 "data_size": 63488 00:19:35.953 }, 00:19:35.953 { 00:19:35.953 "name": "pt2", 00:19:35.953 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:35.953 "is_configured": true, 00:19:35.953 "data_offset": 2048, 00:19:35.953 "data_size": 63488 00:19:35.953 } 00:19:35.953 ] 00:19:35.953 } 00:19:35.953 } 00:19:35.953 }' 00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:35.953 pt2' 00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:35.953 23:01:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:35.953 
[2024-12-09 23:01:11.301509] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:35.953 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=603ccb99-af2b-4b44-afd6-5537a5145368 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 603ccb99-af2b-4b44-afd6-5537a5145368 ']' 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.216 [2024-12-09 23:01:11.337181] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:36.216 [2024-12-09 23:01:11.337220] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:36.216 [2024-12-09 23:01:11.337321] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:36.216 [2024-12-09 23:01:11.337375] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:36.216 [2024-12-09 23:01:11.337388] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.216 [2024-12-09 23:01:11.437244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:36.216 [2024-12-09 23:01:11.439646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:36.216 [2024-12-09 23:01:11.439740] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:36.216 [2024-12-09 23:01:11.439818] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:36.216 [2024-12-09 23:01:11.439833] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:36.216 [2024-12-09 23:01:11.439845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:36.216 request: 00:19:36.216 { 00:19:36.216 "name": "raid_bdev1", 00:19:36.216 "raid_level": "concat", 00:19:36.216 "base_bdevs": [ 00:19:36.216 "malloc1", 00:19:36.216 "malloc2" 00:19:36.216 ], 00:19:36.216 "strip_size_kb": 64, 00:19:36.216 "superblock": false, 00:19:36.216 "method": "bdev_raid_create", 00:19:36.216 "req_id": 1 00:19:36.216 } 00:19:36.216 Got JSON-RPC error response 00:19:36.216 response: 00:19:36.216 { 00:19:36.216 "code": -17, 00:19:36.216 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:36.216 } 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' 
']' 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.216 [2024-12-09 23:01:11.477219] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:36.216 [2024-12-09 23:01:11.477431] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.216 [2024-12-09 23:01:11.477477] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:36.216 [2024-12-09 23:01:11.477490] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.216 [2024-12-09 23:01:11.480165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.216 [2024-12-09 23:01:11.480217] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:36.216 [2024-12-09 23:01:11.480324] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:36.216 [2024-12-09 23:01:11.480388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:36.216 pt1 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.216 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.217 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.217 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.217 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.217 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:36.217 "name": "raid_bdev1", 00:19:36.217 "uuid": "603ccb99-af2b-4b44-afd6-5537a5145368", 00:19:36.217 "strip_size_kb": 64, 00:19:36.217 "state": "configuring", 00:19:36.217 "raid_level": "concat", 00:19:36.217 "superblock": true, 00:19:36.217 "num_base_bdevs": 2, 00:19:36.217 "num_base_bdevs_discovered": 1, 00:19:36.217 "num_base_bdevs_operational": 2, 00:19:36.217 "base_bdevs_list": [ 00:19:36.217 { 00:19:36.217 "name": "pt1", 00:19:36.217 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:36.217 "is_configured": true, 00:19:36.217 "data_offset": 2048, 00:19:36.217 "data_size": 63488 00:19:36.217 }, 00:19:36.217 { 00:19:36.217 "name": null, 00:19:36.217 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:36.217 "is_configured": false, 
00:19:36.217 "data_offset": 2048, 00:19:36.217 "data_size": 63488 00:19:36.217 } 00:19:36.217 ] 00:19:36.217 }' 00:19:36.217 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:36.217 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.479 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:36.479 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:36.479 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:36.479 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:36.479 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.479 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.479 [2024-12-09 23:01:11.805333] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:36.479 [2024-12-09 23:01:11.805591] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.479 [2024-12-09 23:01:11.805622] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:36.479 [2024-12-09 23:01:11.805634] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.479 [2024-12-09 23:01:11.806171] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.479 [2024-12-09 23:01:11.806201] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:36.479 [2024-12-09 23:01:11.806300] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:36.479 [2024-12-09 23:01:11.806332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:36.479 [2024-12-09 23:01:11.806459] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:36.479 [2024-12-09 23:01:11.806478] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:36.479 [2024-12-09 23:01:11.806762] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:36.479 [2024-12-09 23:01:11.806906] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:36.479 [2024-12-09 23:01:11.806915] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:36.479 [2024-12-09 23:01:11.807057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:36.479 pt2 00:19:36.479 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.479 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:36.479 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:36.479 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:19:36.479 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:36.479 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:36.479 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:36.479 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:36.479 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:36.479 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.479 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.479 23:01:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.479 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:36.479 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.479 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.479 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.479 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.479 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.740 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:36.740 "name": "raid_bdev1", 00:19:36.740 "uuid": "603ccb99-af2b-4b44-afd6-5537a5145368", 00:19:36.740 "strip_size_kb": 64, 00:19:36.740 "state": "online", 00:19:36.740 "raid_level": "concat", 00:19:36.740 "superblock": true, 00:19:36.740 "num_base_bdevs": 2, 00:19:36.740 "num_base_bdevs_discovered": 2, 00:19:36.740 "num_base_bdevs_operational": 2, 00:19:36.740 "base_bdevs_list": [ 00:19:36.740 { 00:19:36.740 "name": "pt1", 00:19:36.740 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:36.740 "is_configured": true, 00:19:36.740 "data_offset": 2048, 00:19:36.740 "data_size": 63488 00:19:36.740 }, 00:19:36.740 { 00:19:36.740 "name": "pt2", 00:19:36.740 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:36.740 "is_configured": true, 00:19:36.740 "data_offset": 2048, 00:19:36.740 "data_size": 63488 00:19:36.740 } 00:19:36.740 ] 00:19:36.740 }' 00:19:36.740 23:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:36.740 23:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # 
verify_raid_bdev_properties raid_bdev1 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.002 [2024-12-09 23:01:12.133732] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:37.002 "name": "raid_bdev1", 00:19:37.002 "aliases": [ 00:19:37.002 "603ccb99-af2b-4b44-afd6-5537a5145368" 00:19:37.002 ], 00:19:37.002 "product_name": "Raid Volume", 00:19:37.002 "block_size": 512, 00:19:37.002 "num_blocks": 126976, 00:19:37.002 "uuid": "603ccb99-af2b-4b44-afd6-5537a5145368", 00:19:37.002 "assigned_rate_limits": { 00:19:37.002 "rw_ios_per_sec": 0, 00:19:37.002 "rw_mbytes_per_sec": 0, 00:19:37.002 "r_mbytes_per_sec": 0, 00:19:37.002 "w_mbytes_per_sec": 0 00:19:37.002 }, 00:19:37.002 "claimed": false, 00:19:37.002 "zoned": false, 00:19:37.002 "supported_io_types": { 00:19:37.002 "read": true, 00:19:37.002 "write": true, 00:19:37.002 "unmap": true, 
00:19:37.002 "flush": true, 00:19:37.002 "reset": true, 00:19:37.002 "nvme_admin": false, 00:19:37.002 "nvme_io": false, 00:19:37.002 "nvme_io_md": false, 00:19:37.002 "write_zeroes": true, 00:19:37.002 "zcopy": false, 00:19:37.002 "get_zone_info": false, 00:19:37.002 "zone_management": false, 00:19:37.002 "zone_append": false, 00:19:37.002 "compare": false, 00:19:37.002 "compare_and_write": false, 00:19:37.002 "abort": false, 00:19:37.002 "seek_hole": false, 00:19:37.002 "seek_data": false, 00:19:37.002 "copy": false, 00:19:37.002 "nvme_iov_md": false 00:19:37.002 }, 00:19:37.002 "memory_domains": [ 00:19:37.002 { 00:19:37.002 "dma_device_id": "system", 00:19:37.002 "dma_device_type": 1 00:19:37.002 }, 00:19:37.002 { 00:19:37.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.002 "dma_device_type": 2 00:19:37.002 }, 00:19:37.002 { 00:19:37.002 "dma_device_id": "system", 00:19:37.002 "dma_device_type": 1 00:19:37.002 }, 00:19:37.002 { 00:19:37.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.002 "dma_device_type": 2 00:19:37.002 } 00:19:37.002 ], 00:19:37.002 "driver_specific": { 00:19:37.002 "raid": { 00:19:37.002 "uuid": "603ccb99-af2b-4b44-afd6-5537a5145368", 00:19:37.002 "strip_size_kb": 64, 00:19:37.002 "state": "online", 00:19:37.002 "raid_level": "concat", 00:19:37.002 "superblock": true, 00:19:37.002 "num_base_bdevs": 2, 00:19:37.002 "num_base_bdevs_discovered": 2, 00:19:37.002 "num_base_bdevs_operational": 2, 00:19:37.002 "base_bdevs_list": [ 00:19:37.002 { 00:19:37.002 "name": "pt1", 00:19:37.002 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:37.002 "is_configured": true, 00:19:37.002 "data_offset": 2048, 00:19:37.002 "data_size": 63488 00:19:37.002 }, 00:19:37.002 { 00:19:37.002 "name": "pt2", 00:19:37.002 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:37.002 "is_configured": true, 00:19:37.002 "data_offset": 2048, 00:19:37.002 "data_size": 63488 00:19:37.002 } 00:19:37.002 ] 00:19:37.002 } 00:19:37.002 } 00:19:37.002 }' 
00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:37.002 pt2' 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:37.002 [2024-12-09 23:01:12.305774] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 603ccb99-af2b-4b44-afd6-5537a5145368 '!=' 603ccb99-af2b-4b44-afd6-5537a5145368 ']' 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 60780 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60780 ']' 00:19:37.002 23:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60780 00:19:37.003 23:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:19:37.003 23:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:19:37.003 23:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60780 00:19:37.316 killing process with pid 60780 00:19:37.316 23:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:37.316 23:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:37.316 23:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60780' 00:19:37.316 23:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 60780 00:19:37.316 [2024-12-09 23:01:12.365619] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:37.316 23:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 60780 00:19:37.316 [2024-12-09 23:01:12.365729] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:37.316 [2024-12-09 23:01:12.365787] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:37.316 [2024-12-09 23:01:12.365800] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:37.316 [2024-12-09 23:01:12.512604] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:38.260 ************************************ 00:19:38.260 END TEST raid_superblock_test 00:19:38.260 ************************************ 00:19:38.260 23:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:19:38.260 00:19:38.260 real 0m3.566s 00:19:38.260 user 0m4.842s 00:19:38.260 sys 0m0.659s 00:19:38.260 23:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:38.260 23:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.260 23:01:13 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test 
raid_io_error_test concat 2 read 00:19:38.260 23:01:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:38.260 23:01:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:38.260 23:01:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:38.260 ************************************ 00:19:38.260 START TEST raid_read_error_test 00:19:38.260 ************************************ 00:19:38.260 23:01:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:19:38.260 23:01:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:19:38.260 23:01:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:19:38.260 23:01:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:19:38.260 23:01:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:19:38.260 23:01:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:38.260 23:01:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:19:38.260 23:01:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:38.260 23:01:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:38.260 23:01:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:19:38.260 23:01:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:38.260 23:01:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:38.260 23:01:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:38.261 23:01:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:19:38.261 23:01:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local 
raid_bdev_name=raid_bdev1 00:19:38.261 23:01:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:19:38.261 23:01:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:19:38.261 23:01:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:19:38.261 23:01:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:19:38.261 23:01:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:19:38.261 23:01:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:19:38.261 23:01:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:19:38.261 23:01:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:19:38.261 23:01:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Ru9YOokfCO 00:19:38.261 23:01:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:38.261 23:01:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=60981 00:19:38.261 23:01:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 60981 00:19:38.261 23:01:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 60981 ']' 00:19:38.261 23:01:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.261 23:01:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:38.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.261 23:01:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:38.261 23:01:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:38.261 23:01:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.261 [2024-12-09 23:01:13.490657] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:19:38.261 [2024-12-09 23:01:13.490807] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60981 ] 00:19:38.522 [2024-12-09 23:01:13.658015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.522 [2024-12-09 23:01:13.802295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.784 [2024-12-09 23:01:13.969836] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:38.784 [2024-12-09 23:01:13.969897] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.356 BaseBdev1_malloc 00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.356 true 00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.356 [2024-12-09 23:01:14.489739] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:39.356 [2024-12-09 23:01:14.489820] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:39.356 [2024-12-09 23:01:14.489848] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:39.356 [2024-12-09 23:01:14.489862] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:39.356 [2024-12-09 23:01:14.492465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:39.356 [2024-12-09 23:01:14.492532] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:39.356 BaseBdev1 00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:19:39.356 BaseBdev2_malloc 00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.356 true 00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.356 [2024-12-09 23:01:14.545113] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:39.356 [2024-12-09 23:01:14.545342] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:39.356 [2024-12-09 23:01:14.545391] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:39.356 [2024-12-09 23:01:14.545457] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:39.356 [2024-12-09 23:01:14.547949] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:39.356 [2024-12-09 23:01:14.548052] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:39.356 BaseBdev2 00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:19:39.356 
23:01:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.356 [2024-12-09 23:01:14.553206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:39.356 [2024-12-09 23:01:14.555501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:39.356 [2024-12-09 23:01:14.555904] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:39.356 [2024-12-09 23:01:14.555955] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:39.356 [2024-12-09 23:01:14.556341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:39.356 [2024-12-09 23:01:14.556559] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:39.356 [2024-12-09 23:01:14.556595] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:39.356 [2024-12-09 23:01:14.556795] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:39.356 23:01:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:39.357 23:01:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:19:39.357 23:01:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:39.357 23:01:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:39.357 23:01:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:39.357 23:01:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:39.357 23:01:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.357 23:01:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.357 23:01:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.357 23:01:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.357 23:01:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.357 23:01:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:39.357 "name": "raid_bdev1", 00:19:39.357 "uuid": "fc147a7e-a811-48ad-90c0-f47a87a2f0df", 00:19:39.357 "strip_size_kb": 64, 00:19:39.357 "state": "online", 00:19:39.357 "raid_level": "concat", 00:19:39.357 "superblock": true, 00:19:39.357 "num_base_bdevs": 2, 00:19:39.357 "num_base_bdevs_discovered": 2, 00:19:39.357 "num_base_bdevs_operational": 2, 00:19:39.357 "base_bdevs_list": [ 00:19:39.357 { 00:19:39.357 "name": "BaseBdev1", 00:19:39.357 "uuid": "3bfc0cb2-7ceb-517c-9c39-4c81da63222d", 00:19:39.357 "is_configured": true, 00:19:39.357 "data_offset": 2048, 00:19:39.357 "data_size": 63488 00:19:39.357 }, 00:19:39.357 { 00:19:39.357 "name": "BaseBdev2", 00:19:39.357 "uuid": "e512df39-47f6-5a90-9dd8-c222e283e2d5", 00:19:39.357 "is_configured": true, 00:19:39.357 "data_offset": 2048, 00:19:39.357 "data_size": 63488 00:19:39.357 } 00:19:39.357 ] 00:19:39.357 }' 00:19:39.357 23:01:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:39.357 23:01:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.619 23:01:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:19:39.619 23:01:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:39.898 [2024-12-09 23:01:15.026427] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:40.842 23:01:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:19:40.842 23:01:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.842 23:01:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.842 23:01:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.842 23:01:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:19:40.842 23:01:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:19:40.842 23:01:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:19:40.842 23:01:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:19:40.842 23:01:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:40.842 23:01:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:40.842 23:01:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:40.842 23:01:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:40.842 23:01:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:19:40.842 23:01:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:40.842 23:01:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:40.842 23:01:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:40.842 23:01:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:40.842 23:01:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.842 23:01:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.842 23:01:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.842 23:01:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.842 23:01:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.842 23:01:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.842 "name": "raid_bdev1", 00:19:40.842 "uuid": "fc147a7e-a811-48ad-90c0-f47a87a2f0df", 00:19:40.842 "strip_size_kb": 64, 00:19:40.842 "state": "online", 00:19:40.842 "raid_level": "concat", 00:19:40.842 "superblock": true, 00:19:40.842 "num_base_bdevs": 2, 00:19:40.842 "num_base_bdevs_discovered": 2, 00:19:40.842 "num_base_bdevs_operational": 2, 00:19:40.842 "base_bdevs_list": [ 00:19:40.842 { 00:19:40.842 "name": "BaseBdev1", 00:19:40.842 "uuid": "3bfc0cb2-7ceb-517c-9c39-4c81da63222d", 00:19:40.842 "is_configured": true, 00:19:40.842 "data_offset": 2048, 00:19:40.842 "data_size": 63488 00:19:40.842 }, 00:19:40.842 { 00:19:40.842 "name": "BaseBdev2", 00:19:40.842 "uuid": "e512df39-47f6-5a90-9dd8-c222e283e2d5", 00:19:40.842 "is_configured": true, 00:19:40.842 "data_offset": 2048, 00:19:40.842 "data_size": 63488 00:19:40.842 } 00:19:40.842 ] 00:19:40.842 }' 00:19:40.842 23:01:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.842 23:01:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.103 23:01:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:41.103 23:01:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.103 23:01:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.103 [2024-12-09 23:01:16.270480] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:41.103 [2024-12-09 23:01:16.270528] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:41.103 [2024-12-09 23:01:16.273753] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:41.103 [2024-12-09 23:01:16.273810] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:41.103 [2024-12-09 23:01:16.273847] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:41.103 [2024-12-09 23:01:16.273863] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:41.103 { 00:19:41.103 "results": [ 00:19:41.103 { 00:19:41.103 "job": "raid_bdev1", 00:19:41.103 "core_mask": "0x1", 00:19:41.103 "workload": "randrw", 00:19:41.103 "percentage": 50, 00:19:41.103 "status": "finished", 00:19:41.103 "queue_depth": 1, 00:19:41.103 "io_size": 131072, 00:19:41.103 "runtime": 1.241788, 00:19:41.103 "iops": 12154.248551282506, 00:19:41.104 "mibps": 1519.2810689103133, 00:19:41.104 "io_failed": 1, 00:19:41.104 "io_timeout": 0, 00:19:41.104 "avg_latency_us": 113.82131667193279, 00:19:41.104 "min_latency_us": 34.067692307692305, 00:19:41.104 "max_latency_us": 1751.8276923076924 00:19:41.104 } 00:19:41.104 ], 00:19:41.104 "core_count": 1 00:19:41.104 } 00:19:41.104 23:01:16 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.104 23:01:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 60981 00:19:41.104 23:01:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 60981 ']' 00:19:41.104 23:01:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 60981 00:19:41.104 23:01:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:19:41.104 23:01:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:41.104 23:01:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60981 00:19:41.104 killing process with pid 60981 00:19:41.104 23:01:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:41.104 23:01:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:41.104 23:01:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60981' 00:19:41.104 23:01:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 60981 00:19:41.104 23:01:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 60981 00:19:41.104 [2024-12-09 23:01:16.305910] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:41.104 [2024-12-09 23:01:16.402661] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:42.044 23:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Ru9YOokfCO 00:19:42.044 23:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:19:42.044 23:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:19:42.044 ************************************ 00:19:42.044 END TEST raid_read_error_test 00:19:42.044 
************************************ 00:19:42.044 23:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.81 00:19:42.044 23:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:19:42.044 23:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:42.044 23:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:19:42.044 23:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.81 != \0\.\0\0 ]] 00:19:42.044 00:19:42.044 real 0m3.866s 00:19:42.044 user 0m4.567s 00:19:42.044 sys 0m0.537s 00:19:42.044 23:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:42.044 23:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.044 23:01:17 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:19:42.044 23:01:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:42.044 23:01:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:42.044 23:01:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:42.044 ************************************ 00:19:42.044 START TEST raid_write_error_test 00:19:42.044 ************************************ 00:19:42.044 23:01:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:19:42.044 23:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:19:42.044 23:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:19:42.044 23:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:19:42.044 23:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:19:42.044 23:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= 
num_base_bdevs )) 00:19:42.044 23:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:19:42.044 23:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:42.044 23:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:42.044 23:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:19:42.044 23:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:42.044 23:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:42.044 23:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:42.044 23:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:19:42.044 23:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:19:42.044 23:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:19:42.044 23:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:19:42.044 23:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:19:42.044 23:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:19:42.044 23:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:19:42.044 23:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:19:42.044 23:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:19:42.044 23:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:19:42.044 23:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.FyXg3Szo1J 00:19:42.044 23:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # 
raid_pid=61121 00:19:42.044 23:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61121 00:19:42.044 23:01:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61121 ']' 00:19:42.044 23:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:42.044 23:01:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.044 23:01:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:42.044 23:01:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.044 23:01:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:42.044 23:01:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.305 [2024-12-09 23:01:17.433641] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:19:42.305 [2024-12-09 23:01:17.433838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61121 ] 00:19:42.305 [2024-12-09 23:01:17.593230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.566 [2024-12-09 23:01:17.745936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.566 [2024-12-09 23:01:17.917479] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:42.566 [2024-12-09 23:01:17.917543] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:43.139 23:01:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.140 BaseBdev1_malloc 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.140 true 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.140 [2024-12-09 23:01:18.377831] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:43.140 [2024-12-09 23:01:18.378084] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.140 [2024-12-09 23:01:18.378134] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:43.140 [2024-12-09 23:01:18.378148] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.140 [2024-12-09 23:01:18.380735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.140 [2024-12-09 23:01:18.380796] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:43.140 BaseBdev1 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.140 BaseBdev2_malloc 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:19:43.140 23:01:18 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.140 true 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.140 [2024-12-09 23:01:18.427568] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:43.140 [2024-12-09 23:01:18.427791] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.140 [2024-12-09 23:01:18.427819] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:43.140 [2024-12-09 23:01:18.427831] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.140 [2024-12-09 23:01:18.430344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.140 [2024-12-09 23:01:18.430399] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:43.140 BaseBdev2 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.140 [2024-12-09 23:01:18.435654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:19:43.140 [2024-12-09 23:01:18.437884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:43.140 [2024-12-09 23:01:18.438138] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:43.140 [2024-12-09 23:01:18.438158] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:43.140 [2024-12-09 23:01:18.438454] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:43.140 [2024-12-09 23:01:18.438625] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:43.140 [2024-12-09 23:01:18.438636] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:43.140 [2024-12-09 23:01:18.438798] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.140 23:01:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.140 "name": "raid_bdev1", 00:19:43.140 "uuid": "69999e4e-640e-444b-97e6-b57f596d630a", 00:19:43.140 "strip_size_kb": 64, 00:19:43.140 "state": "online", 00:19:43.140 "raid_level": "concat", 00:19:43.140 "superblock": true, 00:19:43.140 "num_base_bdevs": 2, 00:19:43.140 "num_base_bdevs_discovered": 2, 00:19:43.140 "num_base_bdevs_operational": 2, 00:19:43.140 "base_bdevs_list": [ 00:19:43.140 { 00:19:43.140 "name": "BaseBdev1", 00:19:43.140 "uuid": "a40b2708-255f-5d7b-9f5e-13f1e57efa45", 00:19:43.140 "is_configured": true, 00:19:43.140 "data_offset": 2048, 00:19:43.140 "data_size": 63488 00:19:43.140 }, 00:19:43.140 { 00:19:43.140 "name": "BaseBdev2", 00:19:43.140 "uuid": "cffff0d0-de5b-5467-9750-811b1ab04607", 00:19:43.140 "is_configured": true, 00:19:43.140 "data_offset": 2048, 00:19:43.140 "data_size": 63488 00:19:43.140 } 00:19:43.140 ] 00:19:43.140 }' 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.140 23:01:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.401 23:01:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:19:43.401 23:01:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:43.660 [2024-12-09 23:01:18.828846] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:44.601 23:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:19:44.601 23:01:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.601 23:01:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.601 23:01:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.601 23:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:19:44.601 23:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:19:44.601 23:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:19:44.601 23:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:19:44.601 23:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:44.601 23:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:44.601 23:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:44.601 23:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:44.601 23:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:44.601 23:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:44.601 23:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:19:44.601 23:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:44.601 23:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:44.601 23:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.601 23:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.601 23:01:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.601 23:01:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.601 23:01:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.601 23:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:44.601 "name": "raid_bdev1", 00:19:44.601 "uuid": "69999e4e-640e-444b-97e6-b57f596d630a", 00:19:44.601 "strip_size_kb": 64, 00:19:44.601 "state": "online", 00:19:44.601 "raid_level": "concat", 00:19:44.601 "superblock": true, 00:19:44.601 "num_base_bdevs": 2, 00:19:44.601 "num_base_bdevs_discovered": 2, 00:19:44.601 "num_base_bdevs_operational": 2, 00:19:44.601 "base_bdevs_list": [ 00:19:44.601 { 00:19:44.601 "name": "BaseBdev1", 00:19:44.601 "uuid": "a40b2708-255f-5d7b-9f5e-13f1e57efa45", 00:19:44.601 "is_configured": true, 00:19:44.601 "data_offset": 2048, 00:19:44.601 "data_size": 63488 00:19:44.601 }, 00:19:44.601 { 00:19:44.601 "name": "BaseBdev2", 00:19:44.601 "uuid": "cffff0d0-de5b-5467-9750-811b1ab04607", 00:19:44.601 "is_configured": true, 00:19:44.601 "data_offset": 2048, 00:19:44.601 "data_size": 63488 00:19:44.601 } 00:19:44.601 ] 00:19:44.601 }' 00:19:44.601 23:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:44.601 23:01:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.862 23:01:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:44.862 23:01:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.862 23:01:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.862 [2024-12-09 23:01:20.096400] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:44.862 [2024-12-09 23:01:20.096622] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:44.862 [2024-12-09 23:01:20.099945] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:44.862 { 00:19:44.862 "results": [ 00:19:44.862 { 00:19:44.862 "job": "raid_bdev1", 00:19:44.862 "core_mask": "0x1", 00:19:44.862 "workload": "randrw", 00:19:44.862 "percentage": 50, 00:19:44.862 "status": "finished", 00:19:44.862 "queue_depth": 1, 00:19:44.862 "io_size": 131072, 00:19:44.862 "runtime": 1.265501, 00:19:44.862 "iops": 12286.833435927747, 00:19:44.862 "mibps": 1535.8541794909684, 00:19:44.862 "io_failed": 1, 00:19:44.862 "io_timeout": 0, 00:19:44.862 "avg_latency_us": 112.75974316101905, 00:19:44.862 "min_latency_us": 34.26461538461538, 00:19:44.862 "max_latency_us": 1739.2246153846154 00:19:44.862 } 00:19:44.863 ], 00:19:44.863 "core_count": 1 00:19:44.863 } 00:19:44.863 [2024-12-09 23:01:20.100182] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:44.863 [2024-12-09 23:01:20.100237] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:44.863 [2024-12-09 23:01:20.100250] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:44.863 23:01:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.863 23:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61121 00:19:44.863 23:01:20 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 61121 ']' 00:19:44.863 23:01:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61121 00:19:44.863 23:01:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:19:44.863 23:01:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:44.863 23:01:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61121 00:19:44.863 killing process with pid 61121 00:19:44.863 23:01:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:44.863 23:01:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:44.863 23:01:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61121' 00:19:44.863 23:01:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61121 00:19:44.863 23:01:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61121 00:19:44.863 [2024-12-09 23:01:20.133259] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:45.122 [2024-12-09 23:01:20.233020] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:46.070 23:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.FyXg3Szo1J 00:19:46.070 23:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:19:46.070 23:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:19:46.070 ************************************ 00:19:46.070 END TEST raid_write_error_test 00:19:46.070 ************************************ 00:19:46.070 23:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.79 00:19:46.070 23:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # 
has_redundancy concat 00:19:46.070 23:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:46.070 23:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:19:46.070 23:01:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.79 != \0\.\0\0 ]] 00:19:46.070 00:19:46.070 real 0m3.778s 00:19:46.070 user 0m4.386s 00:19:46.070 sys 0m0.517s 00:19:46.070 23:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:46.070 23:01:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.070 23:01:21 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:19:46.070 23:01:21 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:19:46.070 23:01:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:46.070 23:01:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:46.070 23:01:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:46.070 ************************************ 00:19:46.070 START TEST raid_state_function_test 00:19:46.070 ************************************ 00:19:46.070 23:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:19:46.070 23:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:46.070 23:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:46.070 23:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:19:46.070 23:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:46.070 23:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:46.070 23:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( 
i <= num_base_bdevs )) 00:19:46.070 23:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:46.070 23:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:46.070 23:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:46.070 23:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:46.070 23:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:46.070 23:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:46.070 23:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:46.070 23:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:46.070 23:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:46.070 23:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:46.070 23:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:46.070 23:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:46.070 23:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:46.070 23:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:46.070 23:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:19:46.070 23:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:19:46.070 Process raid pid: 61255 00:19:46.070 23:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61255 00:19:46.070 23:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process 
raid pid: 61255' 00:19:46.070 23:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61255 00:19:46.070 23:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:46.070 23:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61255 ']' 00:19:46.070 23:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.070 23:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:46.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.070 23:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.070 23:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:46.070 23:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.070 [2024-12-09 23:01:21.266946] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:19:46.070 [2024-12-09 23:01:21.267132] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.330 [2024-12-09 23:01:21.430425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.330 [2024-12-09 23:01:21.573736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.590 [2024-12-09 23:01:21.744517] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:46.590 [2024-12-09 23:01:21.744576] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:46.851 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:46.851 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:19:46.851 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:46.851 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.851 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.851 [2024-12-09 23:01:22.155195] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:46.851 [2024-12-09 23:01:22.155267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:46.851 [2024-12-09 23:01:22.155279] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:46.851 [2024-12-09 23:01:22.155289] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:46.851 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.851 23:01:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:46.851 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:46.851 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:46.851 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:46.851 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:46.851 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:46.851 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:46.851 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:46.851 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:46.851 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:46.851 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:46.851 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.851 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.851 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.851 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.851 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:46.851 "name": "Existed_Raid", 00:19:46.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.851 "strip_size_kb": 0, 00:19:46.851 "state": "configuring", 00:19:46.851 
"raid_level": "raid1", 00:19:46.851 "superblock": false, 00:19:46.851 "num_base_bdevs": 2, 00:19:46.851 "num_base_bdevs_discovered": 0, 00:19:46.851 "num_base_bdevs_operational": 2, 00:19:46.851 "base_bdevs_list": [ 00:19:46.851 { 00:19:46.851 "name": "BaseBdev1", 00:19:46.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.851 "is_configured": false, 00:19:46.851 "data_offset": 0, 00:19:46.851 "data_size": 0 00:19:46.851 }, 00:19:46.851 { 00:19:46.851 "name": "BaseBdev2", 00:19:46.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.851 "is_configured": false, 00:19:46.851 "data_offset": 0, 00:19:46.851 "data_size": 0 00:19:46.851 } 00:19:46.851 ] 00:19:46.851 }' 00:19:46.851 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:46.851 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.423 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:47.423 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.423 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.423 [2024-12-09 23:01:22.487234] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:47.423 [2024-12-09 23:01:22.487277] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:47.423 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.423 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:47.423 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.423 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:19:47.423 [2024-12-09 23:01:22.495208] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:47.423 [2024-12-09 23:01:22.495410] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:47.423 [2024-12-09 23:01:22.495837] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:47.423 [2024-12-09 23:01:22.495888] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:47.423 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.423 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:47.423 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.423 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.423 [2024-12-09 23:01:22.533741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:47.423 BaseBdev1 00:19:47.423 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.423 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:47.423 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:47.423 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:47.423 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:47.423 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:47.423 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:47.423 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:19:47.423 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.423 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.423 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.423 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:47.423 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.423 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.423 [ 00:19:47.423 { 00:19:47.423 "name": "BaseBdev1", 00:19:47.423 "aliases": [ 00:19:47.423 "292bf459-a913-4c18-80a1-2ce7bc5ab54f" 00:19:47.423 ], 00:19:47.423 "product_name": "Malloc disk", 00:19:47.423 "block_size": 512, 00:19:47.423 "num_blocks": 65536, 00:19:47.423 "uuid": "292bf459-a913-4c18-80a1-2ce7bc5ab54f", 00:19:47.423 "assigned_rate_limits": { 00:19:47.423 "rw_ios_per_sec": 0, 00:19:47.423 "rw_mbytes_per_sec": 0, 00:19:47.423 "r_mbytes_per_sec": 0, 00:19:47.423 "w_mbytes_per_sec": 0 00:19:47.423 }, 00:19:47.423 "claimed": true, 00:19:47.423 "claim_type": "exclusive_write", 00:19:47.423 "zoned": false, 00:19:47.423 "supported_io_types": { 00:19:47.423 "read": true, 00:19:47.423 "write": true, 00:19:47.423 "unmap": true, 00:19:47.423 "flush": true, 00:19:47.423 "reset": true, 00:19:47.423 "nvme_admin": false, 00:19:47.423 "nvme_io": false, 00:19:47.423 "nvme_io_md": false, 00:19:47.423 "write_zeroes": true, 00:19:47.423 "zcopy": true, 00:19:47.423 "get_zone_info": false, 00:19:47.423 "zone_management": false, 00:19:47.423 "zone_append": false, 00:19:47.423 "compare": false, 00:19:47.423 "compare_and_write": false, 00:19:47.424 "abort": true, 00:19:47.424 "seek_hole": false, 00:19:47.424 "seek_data": false, 00:19:47.424 "copy": true, 00:19:47.424 "nvme_iov_md": 
false 00:19:47.424 }, 00:19:47.424 "memory_domains": [ 00:19:47.424 { 00:19:47.424 "dma_device_id": "system", 00:19:47.424 "dma_device_type": 1 00:19:47.424 }, 00:19:47.424 { 00:19:47.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.424 "dma_device_type": 2 00:19:47.424 } 00:19:47.424 ], 00:19:47.424 "driver_specific": {} 00:19:47.424 } 00:19:47.424 ] 00:19:47.424 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.424 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:47.424 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:47.424 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:47.424 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:47.424 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:47.424 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:47.424 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:47.424 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.424 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.424 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.424 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.424 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.424 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:47.424 
23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.424 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.424 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.424 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.424 "name": "Existed_Raid", 00:19:47.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.424 "strip_size_kb": 0, 00:19:47.424 "state": "configuring", 00:19:47.424 "raid_level": "raid1", 00:19:47.424 "superblock": false, 00:19:47.424 "num_base_bdevs": 2, 00:19:47.424 "num_base_bdevs_discovered": 1, 00:19:47.424 "num_base_bdevs_operational": 2, 00:19:47.424 "base_bdevs_list": [ 00:19:47.424 { 00:19:47.424 "name": "BaseBdev1", 00:19:47.424 "uuid": "292bf459-a913-4c18-80a1-2ce7bc5ab54f", 00:19:47.424 "is_configured": true, 00:19:47.424 "data_offset": 0, 00:19:47.424 "data_size": 65536 00:19:47.424 }, 00:19:47.424 { 00:19:47.424 "name": "BaseBdev2", 00:19:47.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.424 "is_configured": false, 00:19:47.424 "data_offset": 0, 00:19:47.424 "data_size": 0 00:19:47.424 } 00:19:47.424 ] 00:19:47.424 }' 00:19:47.424 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.424 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.686 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:47.686 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.686 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.686 [2024-12-09 23:01:22.921888] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:47.686 [2024-12-09 23:01:22.922125] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:47.686 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.686 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:47.686 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.686 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.686 [2024-12-09 23:01:22.933934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:47.686 [2024-12-09 23:01:22.936315] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:47.686 [2024-12-09 23:01:22.936492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:47.686 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.686 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:47.686 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:47.686 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:47.686 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:47.686 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:47.686 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:47.686 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:47.686 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:19:47.686 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.686 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.686 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.686 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.686 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.686 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.686 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.686 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:47.686 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.686 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.686 "name": "Existed_Raid", 00:19:47.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.686 "strip_size_kb": 0, 00:19:47.686 "state": "configuring", 00:19:47.686 "raid_level": "raid1", 00:19:47.686 "superblock": false, 00:19:47.686 "num_base_bdevs": 2, 00:19:47.686 "num_base_bdevs_discovered": 1, 00:19:47.686 "num_base_bdevs_operational": 2, 00:19:47.686 "base_bdevs_list": [ 00:19:47.686 { 00:19:47.686 "name": "BaseBdev1", 00:19:47.686 "uuid": "292bf459-a913-4c18-80a1-2ce7bc5ab54f", 00:19:47.686 "is_configured": true, 00:19:47.686 "data_offset": 0, 00:19:47.686 "data_size": 65536 00:19:47.686 }, 00:19:47.686 { 00:19:47.686 "name": "BaseBdev2", 00:19:47.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.687 "is_configured": false, 00:19:47.687 "data_offset": 0, 00:19:47.687 "data_size": 0 00:19:47.687 } 00:19:47.687 ] 
00:19:47.687 }' 00:19:47.687 23:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.687 23:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.947 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:47.947 23:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.947 23:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.232 [2024-12-09 23:01:23.310728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:48.232 [2024-12-09 23:01:23.310996] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:48.232 [2024-12-09 23:01:23.311016] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:48.232 [2024-12-09 23:01:23.311371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:48.232 [2024-12-09 23:01:23.311581] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:48.232 [2024-12-09 23:01:23.311595] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:48.232 [2024-12-09 23:01:23.311890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:48.232 BaseBdev2 00:19:48.232 23:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.232 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:48.232 23:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:48.232 23:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:48.232 23:01:23 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:19:48.232 23:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:48.232 23:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:48.232 23:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:48.232 23:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.232 23:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.232 23:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.232 23:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:48.232 23:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.232 23:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.232 [ 00:19:48.232 { 00:19:48.232 "name": "BaseBdev2", 00:19:48.232 "aliases": [ 00:19:48.232 "fe6764bb-5850-4337-8fc7-e50a38a6002b" 00:19:48.232 ], 00:19:48.232 "product_name": "Malloc disk", 00:19:48.232 "block_size": 512, 00:19:48.232 "num_blocks": 65536, 00:19:48.232 "uuid": "fe6764bb-5850-4337-8fc7-e50a38a6002b", 00:19:48.232 "assigned_rate_limits": { 00:19:48.232 "rw_ios_per_sec": 0, 00:19:48.232 "rw_mbytes_per_sec": 0, 00:19:48.232 "r_mbytes_per_sec": 0, 00:19:48.232 "w_mbytes_per_sec": 0 00:19:48.232 }, 00:19:48.232 "claimed": true, 00:19:48.232 "claim_type": "exclusive_write", 00:19:48.232 "zoned": false, 00:19:48.232 "supported_io_types": { 00:19:48.232 "read": true, 00:19:48.232 "write": true, 00:19:48.232 "unmap": true, 00:19:48.232 "flush": true, 00:19:48.232 "reset": true, 00:19:48.232 "nvme_admin": false, 00:19:48.232 "nvme_io": false, 00:19:48.232 "nvme_io_md": false, 00:19:48.232 "write_zeroes": 
true, 00:19:48.232 "zcopy": true, 00:19:48.232 "get_zone_info": false, 00:19:48.232 "zone_management": false, 00:19:48.232 "zone_append": false, 00:19:48.232 "compare": false, 00:19:48.232 "compare_and_write": false, 00:19:48.232 "abort": true, 00:19:48.232 "seek_hole": false, 00:19:48.232 "seek_data": false, 00:19:48.232 "copy": true, 00:19:48.232 "nvme_iov_md": false 00:19:48.232 }, 00:19:48.232 "memory_domains": [ 00:19:48.232 { 00:19:48.232 "dma_device_id": "system", 00:19:48.232 "dma_device_type": 1 00:19:48.232 }, 00:19:48.232 { 00:19:48.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:48.232 "dma_device_type": 2 00:19:48.232 } 00:19:48.232 ], 00:19:48.232 "driver_specific": {} 00:19:48.232 } 00:19:48.232 ] 00:19:48.232 23:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.232 23:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:48.232 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:48.232 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:48.232 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:48.233 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:48.233 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:48.233 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:48.233 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:48.233 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:48.233 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:48.233 23:01:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:48.233 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:48.233 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:48.233 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.233 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:48.233 23:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.233 23:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.233 23:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.233 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:48.233 "name": "Existed_Raid", 00:19:48.233 "uuid": "92e764a5-cb8c-419a-90fa-883a1ae0b583", 00:19:48.233 "strip_size_kb": 0, 00:19:48.233 "state": "online", 00:19:48.233 "raid_level": "raid1", 00:19:48.233 "superblock": false, 00:19:48.233 "num_base_bdevs": 2, 00:19:48.233 "num_base_bdevs_discovered": 2, 00:19:48.233 "num_base_bdevs_operational": 2, 00:19:48.233 "base_bdevs_list": [ 00:19:48.233 { 00:19:48.233 "name": "BaseBdev1", 00:19:48.233 "uuid": "292bf459-a913-4c18-80a1-2ce7bc5ab54f", 00:19:48.233 "is_configured": true, 00:19:48.233 "data_offset": 0, 00:19:48.233 "data_size": 65536 00:19:48.233 }, 00:19:48.233 { 00:19:48.233 "name": "BaseBdev2", 00:19:48.233 "uuid": "fe6764bb-5850-4337-8fc7-e50a38a6002b", 00:19:48.233 "is_configured": true, 00:19:48.233 "data_offset": 0, 00:19:48.233 "data_size": 65536 00:19:48.233 } 00:19:48.233 ] 00:19:48.233 }' 00:19:48.233 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:48.233 23:01:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.493 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:48.493 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:48.493 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:48.493 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:48.493 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:48.493 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:48.493 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:48.493 23:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.493 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:48.493 23:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.493 [2024-12-09 23:01:23.671271] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:48.493 23:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.493 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:48.493 "name": "Existed_Raid", 00:19:48.493 "aliases": [ 00:19:48.493 "92e764a5-cb8c-419a-90fa-883a1ae0b583" 00:19:48.493 ], 00:19:48.493 "product_name": "Raid Volume", 00:19:48.493 "block_size": 512, 00:19:48.493 "num_blocks": 65536, 00:19:48.493 "uuid": "92e764a5-cb8c-419a-90fa-883a1ae0b583", 00:19:48.493 "assigned_rate_limits": { 00:19:48.493 "rw_ios_per_sec": 0, 00:19:48.493 "rw_mbytes_per_sec": 0, 00:19:48.493 "r_mbytes_per_sec": 0, 00:19:48.493 
"w_mbytes_per_sec": 0 00:19:48.493 }, 00:19:48.493 "claimed": false, 00:19:48.493 "zoned": false, 00:19:48.493 "supported_io_types": { 00:19:48.493 "read": true, 00:19:48.493 "write": true, 00:19:48.493 "unmap": false, 00:19:48.493 "flush": false, 00:19:48.493 "reset": true, 00:19:48.493 "nvme_admin": false, 00:19:48.493 "nvme_io": false, 00:19:48.493 "nvme_io_md": false, 00:19:48.493 "write_zeroes": true, 00:19:48.493 "zcopy": false, 00:19:48.493 "get_zone_info": false, 00:19:48.493 "zone_management": false, 00:19:48.493 "zone_append": false, 00:19:48.493 "compare": false, 00:19:48.493 "compare_and_write": false, 00:19:48.493 "abort": false, 00:19:48.493 "seek_hole": false, 00:19:48.493 "seek_data": false, 00:19:48.493 "copy": false, 00:19:48.493 "nvme_iov_md": false 00:19:48.493 }, 00:19:48.493 "memory_domains": [ 00:19:48.493 { 00:19:48.493 "dma_device_id": "system", 00:19:48.493 "dma_device_type": 1 00:19:48.493 }, 00:19:48.493 { 00:19:48.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:48.493 "dma_device_type": 2 00:19:48.493 }, 00:19:48.493 { 00:19:48.493 "dma_device_id": "system", 00:19:48.493 "dma_device_type": 1 00:19:48.493 }, 00:19:48.493 { 00:19:48.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:48.493 "dma_device_type": 2 00:19:48.493 } 00:19:48.493 ], 00:19:48.493 "driver_specific": { 00:19:48.493 "raid": { 00:19:48.493 "uuid": "92e764a5-cb8c-419a-90fa-883a1ae0b583", 00:19:48.493 "strip_size_kb": 0, 00:19:48.493 "state": "online", 00:19:48.493 "raid_level": "raid1", 00:19:48.493 "superblock": false, 00:19:48.493 "num_base_bdevs": 2, 00:19:48.493 "num_base_bdevs_discovered": 2, 00:19:48.493 "num_base_bdevs_operational": 2, 00:19:48.493 "base_bdevs_list": [ 00:19:48.493 { 00:19:48.493 "name": "BaseBdev1", 00:19:48.493 "uuid": "292bf459-a913-4c18-80a1-2ce7bc5ab54f", 00:19:48.493 "is_configured": true, 00:19:48.493 "data_offset": 0, 00:19:48.493 "data_size": 65536 00:19:48.493 }, 00:19:48.493 { 00:19:48.493 "name": "BaseBdev2", 00:19:48.493 "uuid": 
"fe6764bb-5850-4337-8fc7-e50a38a6002b", 00:19:48.493 "is_configured": true, 00:19:48.493 "data_offset": 0, 00:19:48.493 "data_size": 65536 00:19:48.493 } 00:19:48.493 ] 00:19:48.493 } 00:19:48.493 } 00:19:48.493 }' 00:19:48.493 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:48.493 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:48.493 BaseBdev2' 00:19:48.493 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:48.493 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:48.493 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:48.493 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:48.493 23:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.493 23:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.493 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:48.493 23:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.493 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:48.493 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:48.493 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:48.493 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:48.493 23:01:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.493 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:48.493 23:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.493 23:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.493 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:48.493 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:48.493 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:48.493 23:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.493 23:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.493 [2024-12-09 23:01:23.826998] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:48.754 23:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.754 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:48.754 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:48.754 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:48.754 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:48.754 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:48.754 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:48.754 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:19:48.754 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:48.754 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:48.754 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:48.754 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:48.754 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:48.754 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:48.754 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:48.754 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:48.754 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.754 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:48.754 23:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.754 23:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.754 23:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.754 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:48.754 "name": "Existed_Raid", 00:19:48.754 "uuid": "92e764a5-cb8c-419a-90fa-883a1ae0b583", 00:19:48.754 "strip_size_kb": 0, 00:19:48.754 "state": "online", 00:19:48.754 "raid_level": "raid1", 00:19:48.754 "superblock": false, 00:19:48.754 "num_base_bdevs": 2, 00:19:48.754 "num_base_bdevs_discovered": 1, 00:19:48.754 "num_base_bdevs_operational": 1, 00:19:48.754 "base_bdevs_list": [ 00:19:48.754 { 
00:19:48.754 "name": null, 00:19:48.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.754 "is_configured": false, 00:19:48.754 "data_offset": 0, 00:19:48.754 "data_size": 65536 00:19:48.754 }, 00:19:48.754 { 00:19:48.754 "name": "BaseBdev2", 00:19:48.754 "uuid": "fe6764bb-5850-4337-8fc7-e50a38a6002b", 00:19:48.754 "is_configured": true, 00:19:48.754 "data_offset": 0, 00:19:48.754 "data_size": 65536 00:19:48.754 } 00:19:48.754 ] 00:19:48.754 }' 00:19:48.754 23:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:48.754 23:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.015 23:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:49.015 23:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:49.015 23:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:49.015 23:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.015 23:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.015 23:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.015 23:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.015 23:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:49.015 23:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:49.015 23:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:49.015 23:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.015 23:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:19:49.015 [2024-12-09 23:01:24.259958] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:49.015 [2024-12-09 23:01:24.260125] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:49.015 [2024-12-09 23:01:24.331763] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:49.015 [2024-12-09 23:01:24.331837] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:49.015 [2024-12-09 23:01:24.331851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:49.015 23:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.015 23:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:49.015 23:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:49.015 23:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:49.015 23:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.015 23:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.015 23:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.015 23:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.015 23:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:49.015 23:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:49.015 23:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:49.015 23:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61255 00:19:49.015 23:01:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61255 ']' 00:19:49.015 23:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 61255 00:19:49.015 23:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:19:49.015 23:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:49.015 23:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61255 00:19:49.276 killing process with pid 61255 00:19:49.276 23:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:49.276 23:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:49.276 23:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61255' 00:19:49.276 23:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61255 00:19:49.276 [2024-12-09 23:01:24.394681] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:49.276 23:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61255 00:19:49.276 [2024-12-09 23:01:24.408218] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:50.221 23:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:19:50.221 00:19:50.221 real 0m4.056s 00:19:50.221 user 0m5.679s 00:19:50.221 sys 0m0.729s 00:19:50.221 23:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:50.221 23:01:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:50.221 ************************************ 00:19:50.221 END TEST raid_state_function_test 00:19:50.221 ************************************ 00:19:50.221 23:01:25 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:19:50.221 23:01:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:50.221 23:01:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:50.221 23:01:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:50.221 ************************************ 00:19:50.221 START TEST raid_state_function_test_sb 00:19:50.221 ************************************ 00:19:50.221 23:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:19:50.221 23:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:50.221 23:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:50.221 23:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:50.221 23:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:50.221 23:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:50.221 23:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:50.221 23:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:50.221 23:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:50.221 23:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:50.221 23:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:50.221 23:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:50.221 23:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:50.221 23:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:50.221 23:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:50.221 23:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:50.221 23:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:50.221 23:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:50.221 23:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:50.221 23:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:50.221 23:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:50.221 23:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:50.221 23:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:50.221 Process raid pid: 61502 00:19:50.221 23:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61502 00:19:50.221 23:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61502' 00:19:50.221 23:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61502 00:19:50.221 23:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61502 ']' 00:19:50.221 23:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:50.221 23:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:50.221 23:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.221 23:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.221 23:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.221 23:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:50.221 [2024-12-09 23:01:25.405727] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:19:50.221 [2024-12-09 23:01:25.406159] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:50.221 [2024-12-09 23:01:25.568629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.483 [2024-12-09 23:01:25.710726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.743 [2024-12-09 23:01:25.877249] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:50.743 [2024-12-09 23:01:25.877297] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:51.006 23:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:51.006 23:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:19:51.006 23:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:51.006 23:01:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.006 23:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.006 [2024-12-09 23:01:26.298698] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:51.006 [2024-12-09 23:01:26.299020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:51.006 [2024-12-09 23:01:26.299046] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:51.006 [2024-12-09 23:01:26.299059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:51.006 23:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.006 23:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:51.007 23:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:51.007 23:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:51.007 23:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:51.007 23:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:51.007 23:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:51.007 23:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:51.007 23:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:51.007 23:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:51.007 23:01:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:19:51.007 23:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:51.007 23:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.007 23:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.007 23:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.007 23:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.007 23:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:51.007 "name": "Existed_Raid", 00:19:51.007 "uuid": "c7c84730-1e55-4bc7-bd81-df2f68e49463", 00:19:51.007 "strip_size_kb": 0, 00:19:51.007 "state": "configuring", 00:19:51.007 "raid_level": "raid1", 00:19:51.007 "superblock": true, 00:19:51.007 "num_base_bdevs": 2, 00:19:51.007 "num_base_bdevs_discovered": 0, 00:19:51.007 "num_base_bdevs_operational": 2, 00:19:51.007 "base_bdevs_list": [ 00:19:51.007 { 00:19:51.007 "name": "BaseBdev1", 00:19:51.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.007 "is_configured": false, 00:19:51.007 "data_offset": 0, 00:19:51.007 "data_size": 0 00:19:51.007 }, 00:19:51.007 { 00:19:51.007 "name": "BaseBdev2", 00:19:51.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.007 "is_configured": false, 00:19:51.007 "data_offset": 0, 00:19:51.007 "data_size": 0 00:19:51.007 } 00:19:51.007 ] 00:19:51.007 }' 00:19:51.007 23:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:51.007 23:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.278 23:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:51.278 23:01:26 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.278 23:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.278 [2024-12-09 23:01:26.630705] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:51.278 [2024-12-09 23:01:26.630752] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:51.278 23:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.278 23:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:51.278 23:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.278 23:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.278 [2024-12-09 23:01:26.638688] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:51.278 [2024-12-09 23:01:26.638906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:51.537 [2024-12-09 23:01:26.638987] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:51.537 [2024-12-09 23:01:26.639009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.537 [2024-12-09 23:01:26.677424] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:51.537 BaseBdev1 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.537 [ 00:19:51.537 { 00:19:51.537 "name": "BaseBdev1", 00:19:51.537 "aliases": [ 00:19:51.537 "a8626456-89fe-4a3f-a2fb-7ce762286a35" 00:19:51.537 ], 00:19:51.537 "product_name": "Malloc disk", 00:19:51.537 "block_size": 512, 00:19:51.537 "num_blocks": 65536, 00:19:51.537 
"uuid": "a8626456-89fe-4a3f-a2fb-7ce762286a35", 00:19:51.537 "assigned_rate_limits": { 00:19:51.537 "rw_ios_per_sec": 0, 00:19:51.537 "rw_mbytes_per_sec": 0, 00:19:51.537 "r_mbytes_per_sec": 0, 00:19:51.537 "w_mbytes_per_sec": 0 00:19:51.537 }, 00:19:51.537 "claimed": true, 00:19:51.537 "claim_type": "exclusive_write", 00:19:51.537 "zoned": false, 00:19:51.537 "supported_io_types": { 00:19:51.537 "read": true, 00:19:51.537 "write": true, 00:19:51.537 "unmap": true, 00:19:51.537 "flush": true, 00:19:51.537 "reset": true, 00:19:51.537 "nvme_admin": false, 00:19:51.537 "nvme_io": false, 00:19:51.537 "nvme_io_md": false, 00:19:51.537 "write_zeroes": true, 00:19:51.537 "zcopy": true, 00:19:51.537 "get_zone_info": false, 00:19:51.537 "zone_management": false, 00:19:51.537 "zone_append": false, 00:19:51.537 "compare": false, 00:19:51.537 "compare_and_write": false, 00:19:51.537 "abort": true, 00:19:51.537 "seek_hole": false, 00:19:51.537 "seek_data": false, 00:19:51.537 "copy": true, 00:19:51.537 "nvme_iov_md": false 00:19:51.537 }, 00:19:51.537 "memory_domains": [ 00:19:51.537 { 00:19:51.537 "dma_device_id": "system", 00:19:51.537 "dma_device_type": 1 00:19:51.537 }, 00:19:51.537 { 00:19:51.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:51.537 "dma_device_type": 2 00:19:51.537 } 00:19:51.537 ], 00:19:51.537 "driver_specific": {} 00:19:51.537 } 00:19:51.537 ] 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 
00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:51.537 "name": "Existed_Raid", 00:19:51.537 "uuid": "48df7913-9e22-4f5d-8546-5fb3d8e27f62", 00:19:51.537 "strip_size_kb": 0, 00:19:51.537 "state": "configuring", 00:19:51.537 "raid_level": "raid1", 00:19:51.537 "superblock": true, 00:19:51.537 "num_base_bdevs": 2, 00:19:51.537 "num_base_bdevs_discovered": 1, 00:19:51.537 "num_base_bdevs_operational": 2, 00:19:51.537 "base_bdevs_list": [ 00:19:51.537 { 00:19:51.537 "name": "BaseBdev1", 00:19:51.537 "uuid": "a8626456-89fe-4a3f-a2fb-7ce762286a35", 
00:19:51.537 "is_configured": true, 00:19:51.537 "data_offset": 2048, 00:19:51.537 "data_size": 63488 00:19:51.537 }, 00:19:51.537 { 00:19:51.537 "name": "BaseBdev2", 00:19:51.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.537 "is_configured": false, 00:19:51.537 "data_offset": 0, 00:19:51.537 "data_size": 0 00:19:51.537 } 00:19:51.537 ] 00:19:51.537 }' 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:51.537 23:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.798 23:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:51.798 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.798 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.798 [2024-12-09 23:01:27.005548] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:51.798 [2024-12-09 23:01:27.005773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:51.798 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.798 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:51.798 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.798 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.798 [2024-12-09 23:01:27.017599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:51.798 [2024-12-09 23:01:27.019932] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:51.798 [2024-12-09 23:01:27.020148] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:51.798 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.798 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:51.798 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:51.798 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:51.798 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:51.798 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:51.798 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:51.799 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:51.799 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:51.799 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:51.799 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:51.799 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:51.799 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:51.799 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.799 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.799 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:51.799 23:01:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.799 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.799 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:51.799 "name": "Existed_Raid", 00:19:51.799 "uuid": "ee6908de-9bc8-4a84-b311-dd3dba5e15b4", 00:19:51.799 "strip_size_kb": 0, 00:19:51.799 "state": "configuring", 00:19:51.799 "raid_level": "raid1", 00:19:51.799 "superblock": true, 00:19:51.799 "num_base_bdevs": 2, 00:19:51.799 "num_base_bdevs_discovered": 1, 00:19:51.799 "num_base_bdevs_operational": 2, 00:19:51.799 "base_bdevs_list": [ 00:19:51.799 { 00:19:51.799 "name": "BaseBdev1", 00:19:51.799 "uuid": "a8626456-89fe-4a3f-a2fb-7ce762286a35", 00:19:51.799 "is_configured": true, 00:19:51.799 "data_offset": 2048, 00:19:51.799 "data_size": 63488 00:19:51.799 }, 00:19:51.799 { 00:19:51.799 "name": "BaseBdev2", 00:19:51.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.799 "is_configured": false, 00:19:51.799 "data_offset": 0, 00:19:51.799 "data_size": 0 00:19:51.799 } 00:19:51.799 ] 00:19:51.799 }' 00:19:51.799 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:51.799 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.058 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:52.058 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.058 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.058 [2024-12-09 23:01:27.366202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:52.058 BaseBdev2 00:19:52.059 [2024-12-09 23:01:27.366692] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:19:52.059 [2024-12-09 23:01:27.366719] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:52.059 [2024-12-09 23:01:27.367030] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:52.059 [2024-12-09 23:01:27.367236] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:52.059 [2024-12-09 23:01:27.367252] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:52.059 [2024-12-09 23:01:27.367404] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:52.059 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.059 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:52.059 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:52.059 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:52.059 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:52.059 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:52.059 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:52.059 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:52.059 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.059 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.059 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.059 23:01:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:52.059 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.059 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.059 [ 00:19:52.059 { 00:19:52.059 "name": "BaseBdev2", 00:19:52.059 "aliases": [ 00:19:52.059 "a893322e-85c5-4651-9ca5-6469c7a32020" 00:19:52.059 ], 00:19:52.059 "product_name": "Malloc disk", 00:19:52.059 "block_size": 512, 00:19:52.059 "num_blocks": 65536, 00:19:52.059 "uuid": "a893322e-85c5-4651-9ca5-6469c7a32020", 00:19:52.059 "assigned_rate_limits": { 00:19:52.059 "rw_ios_per_sec": 0, 00:19:52.059 "rw_mbytes_per_sec": 0, 00:19:52.059 "r_mbytes_per_sec": 0, 00:19:52.059 "w_mbytes_per_sec": 0 00:19:52.059 }, 00:19:52.059 "claimed": true, 00:19:52.059 "claim_type": "exclusive_write", 00:19:52.059 "zoned": false, 00:19:52.059 "supported_io_types": { 00:19:52.059 "read": true, 00:19:52.059 "write": true, 00:19:52.059 "unmap": true, 00:19:52.059 "flush": true, 00:19:52.059 "reset": true, 00:19:52.059 "nvme_admin": false, 00:19:52.059 "nvme_io": false, 00:19:52.059 "nvme_io_md": false, 00:19:52.059 "write_zeroes": true, 00:19:52.059 "zcopy": true, 00:19:52.059 "get_zone_info": false, 00:19:52.059 "zone_management": false, 00:19:52.059 "zone_append": false, 00:19:52.059 "compare": false, 00:19:52.059 "compare_and_write": false, 00:19:52.059 "abort": true, 00:19:52.059 "seek_hole": false, 00:19:52.059 "seek_data": false, 00:19:52.059 "copy": true, 00:19:52.059 "nvme_iov_md": false 00:19:52.059 }, 00:19:52.059 "memory_domains": [ 00:19:52.059 { 00:19:52.059 "dma_device_id": "system", 00:19:52.059 "dma_device_type": 1 00:19:52.059 }, 00:19:52.059 { 00:19:52.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:52.059 "dma_device_type": 2 00:19:52.059 } 00:19:52.059 ], 00:19:52.059 "driver_specific": {} 00:19:52.059 } 00:19:52.059 ] 00:19:52.059 23:01:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.059 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:52.059 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:52.059 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:52.059 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:52.059 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:52.059 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:52.059 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:52.059 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:52.059 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:52.059 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:52.059 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:52.059 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:52.059 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:52.059 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.059 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:52.059 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.059 23:01:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.059 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.320 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.320 "name": "Existed_Raid", 00:19:52.320 "uuid": "ee6908de-9bc8-4a84-b311-dd3dba5e15b4", 00:19:52.320 "strip_size_kb": 0, 00:19:52.320 "state": "online", 00:19:52.320 "raid_level": "raid1", 00:19:52.320 "superblock": true, 00:19:52.320 "num_base_bdevs": 2, 00:19:52.320 "num_base_bdevs_discovered": 2, 00:19:52.320 "num_base_bdevs_operational": 2, 00:19:52.320 "base_bdevs_list": [ 00:19:52.320 { 00:19:52.320 "name": "BaseBdev1", 00:19:52.320 "uuid": "a8626456-89fe-4a3f-a2fb-7ce762286a35", 00:19:52.320 "is_configured": true, 00:19:52.320 "data_offset": 2048, 00:19:52.320 "data_size": 63488 00:19:52.320 }, 00:19:52.320 { 00:19:52.320 "name": "BaseBdev2", 00:19:52.320 "uuid": "a893322e-85c5-4651-9ca5-6469c7a32020", 00:19:52.320 "is_configured": true, 00:19:52.320 "data_offset": 2048, 00:19:52.320 "data_size": 63488 00:19:52.320 } 00:19:52.320 ] 00:19:52.320 }' 00:19:52.320 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:52.320 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:52.582 23:01:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:52.582 [2024-12-09 23:01:27.718652] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:52.582 "name": "Existed_Raid", 00:19:52.582 "aliases": [ 00:19:52.582 "ee6908de-9bc8-4a84-b311-dd3dba5e15b4" 00:19:52.582 ], 00:19:52.582 "product_name": "Raid Volume", 00:19:52.582 "block_size": 512, 00:19:52.582 "num_blocks": 63488, 00:19:52.582 "uuid": "ee6908de-9bc8-4a84-b311-dd3dba5e15b4", 00:19:52.582 "assigned_rate_limits": { 00:19:52.582 "rw_ios_per_sec": 0, 00:19:52.582 "rw_mbytes_per_sec": 0, 00:19:52.582 "r_mbytes_per_sec": 0, 00:19:52.582 "w_mbytes_per_sec": 0 00:19:52.582 }, 00:19:52.582 "claimed": false, 00:19:52.582 "zoned": false, 00:19:52.582 "supported_io_types": { 00:19:52.582 "read": true, 00:19:52.582 "write": true, 00:19:52.582 "unmap": false, 00:19:52.582 "flush": false, 00:19:52.582 "reset": true, 00:19:52.582 "nvme_admin": false, 00:19:52.582 "nvme_io": false, 00:19:52.582 "nvme_io_md": false, 00:19:52.582 "write_zeroes": true, 00:19:52.582 "zcopy": false, 00:19:52.582 "get_zone_info": false, 00:19:52.582 "zone_management": false, 00:19:52.582 "zone_append": false, 00:19:52.582 "compare": false, 00:19:52.582 "compare_and_write": false, 00:19:52.582 "abort": false, 
00:19:52.582 "seek_hole": false, 00:19:52.582 "seek_data": false, 00:19:52.582 "copy": false, 00:19:52.582 "nvme_iov_md": false 00:19:52.582 }, 00:19:52.582 "memory_domains": [ 00:19:52.582 { 00:19:52.582 "dma_device_id": "system", 00:19:52.582 "dma_device_type": 1 00:19:52.582 }, 00:19:52.582 { 00:19:52.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:52.582 "dma_device_type": 2 00:19:52.582 }, 00:19:52.582 { 00:19:52.582 "dma_device_id": "system", 00:19:52.582 "dma_device_type": 1 00:19:52.582 }, 00:19:52.582 { 00:19:52.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:52.582 "dma_device_type": 2 00:19:52.582 } 00:19:52.582 ], 00:19:52.582 "driver_specific": { 00:19:52.582 "raid": { 00:19:52.582 "uuid": "ee6908de-9bc8-4a84-b311-dd3dba5e15b4", 00:19:52.582 "strip_size_kb": 0, 00:19:52.582 "state": "online", 00:19:52.582 "raid_level": "raid1", 00:19:52.582 "superblock": true, 00:19:52.582 "num_base_bdevs": 2, 00:19:52.582 "num_base_bdevs_discovered": 2, 00:19:52.582 "num_base_bdevs_operational": 2, 00:19:52.582 "base_bdevs_list": [ 00:19:52.582 { 00:19:52.582 "name": "BaseBdev1", 00:19:52.582 "uuid": "a8626456-89fe-4a3f-a2fb-7ce762286a35", 00:19:52.582 "is_configured": true, 00:19:52.582 "data_offset": 2048, 00:19:52.582 "data_size": 63488 00:19:52.582 }, 00:19:52.582 { 00:19:52.582 "name": "BaseBdev2", 00:19:52.582 "uuid": "a893322e-85c5-4651-9ca5-6469c7a32020", 00:19:52.582 "is_configured": true, 00:19:52.582 "data_offset": 2048, 00:19:52.582 "data_size": 63488 00:19:52.582 } 00:19:52.582 ] 00:19:52.582 } 00:19:52.582 } 00:19:52.582 }' 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:52.582 BaseBdev2' 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:52.582 23:01:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.582 [2024-12-09 23:01:27.870388] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:52.582 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:19:52.583 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:52.583 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:52.583 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:52.583 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:52.583 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:52.583 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:52.583 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:52.583 23:01:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:52.583 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:52.583 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:52.583 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:52.583 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.583 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:52.844 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.844 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.844 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.844 23:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.844 "name": "Existed_Raid", 00:19:52.844 "uuid": "ee6908de-9bc8-4a84-b311-dd3dba5e15b4", 00:19:52.844 "strip_size_kb": 0, 00:19:52.844 "state": "online", 00:19:52.844 "raid_level": "raid1", 00:19:52.844 "superblock": true, 00:19:52.844 "num_base_bdevs": 2, 00:19:52.844 "num_base_bdevs_discovered": 1, 00:19:52.844 "num_base_bdevs_operational": 1, 00:19:52.844 "base_bdevs_list": [ 00:19:52.844 { 00:19:52.844 "name": null, 00:19:52.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.844 "is_configured": false, 00:19:52.844 "data_offset": 0, 00:19:52.844 "data_size": 63488 00:19:52.844 }, 00:19:52.844 { 00:19:52.844 "name": "BaseBdev2", 00:19:52.844 "uuid": "a893322e-85c5-4651-9ca5-6469c7a32020", 00:19:52.844 "is_configured": true, 00:19:52.844 "data_offset": 2048, 00:19:52.844 "data_size": 63488 00:19:52.844 } 00:19:52.844 ] 00:19:52.844 }' 00:19:52.844 23:01:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:52.844 23:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.105 23:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:53.105 23:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:53.105 23:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.105 23:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.105 23:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.105 23:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:53.105 23:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.105 23:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:53.105 23:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:53.105 23:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:53.105 23:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.105 23:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.105 [2024-12-09 23:01:28.300282] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:53.105 [2024-12-09 23:01:28.300580] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:53.105 [2024-12-09 23:01:28.369008] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:53.105 [2024-12-09 23:01:28.369071] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:53.105 [2024-12-09 23:01:28.369084] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:53.105 23:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.105 23:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:53.105 23:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:53.105 23:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.105 23:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.105 23:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:53.105 23:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.105 23:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.105 23:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:53.105 23:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:53.105 23:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:53.105 23:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61502 00:19:53.105 23:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61502 ']' 00:19:53.105 23:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61502 00:19:53.105 23:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:19:53.105 23:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:19:53.105 23:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61502 00:19:53.105 killing process with pid 61502 00:19:53.105 23:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:53.105 23:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:53.105 23:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61502' 00:19:53.105 23:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61502 00:19:53.105 [2024-12-09 23:01:28.429127] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:53.105 23:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61502 00:19:53.105 [2024-12-09 23:01:28.441253] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:54.045 23:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:19:54.045 00:19:54.045 real 0m3.949s 00:19:54.045 user 0m5.501s 00:19:54.045 sys 0m0.703s 00:19:54.045 23:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:54.045 ************************************ 00:19:54.045 END TEST raid_state_function_test_sb 00:19:54.045 ************************************ 00:19:54.045 23:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.045 23:01:29 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:19:54.045 23:01:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:54.045 23:01:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:54.045 23:01:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:54.045 ************************************ 00:19:54.045 START TEST 
raid_superblock_test 00:19:54.045 ************************************ 00:19:54.045 23:01:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:19:54.045 23:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:54.045 23:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:54.045 23:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:54.045 23:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:54.046 23:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:54.046 23:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:54.046 23:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:54.046 23:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:54.046 23:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:54.046 23:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:54.046 23:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:54.046 23:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:54.046 23:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:54.046 23:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:54.046 23:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:54.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:54.046 23:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61737 00:19:54.046 23:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61737 00:19:54.046 23:01:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61737 ']' 00:19:54.046 23:01:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.046 23:01:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:54.046 23:01:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:54.046 23:01:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.046 23:01:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:54.046 23:01:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.306 [2024-12-09 23:01:29.423158] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:19:54.306 [2024-12-09 23:01:29.423318] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61737 ] 00:19:54.306 [2024-12-09 23:01:29.587019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.567 [2024-12-09 23:01:29.728204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.567 [2024-12-09 23:01:29.890072] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:54.567 [2024-12-09 23:01:29.890139] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:19:55.142 
23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.142 malloc1 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.142 [2024-12-09 23:01:30.359785] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:55.142 [2024-12-09 23:01:30.360021] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.142 [2024-12-09 23:01:30.360073] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:55.142 [2024-12-09 23:01:30.360161] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.142 [2024-12-09 23:01:30.362733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.142 [2024-12-09 23:01:30.362917] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:55.142 pt1 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.142 malloc2 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.142 [2024-12-09 23:01:30.405673] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:55.142 [2024-12-09 23:01:30.405878] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.142 [2024-12-09 23:01:30.405933] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:55.142 [2024-12-09 23:01:30.406004] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.142 [2024-12-09 23:01:30.408528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.142 [2024-12-09 23:01:30.408577] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:55.142 
pt2 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.142 [2024-12-09 23:01:30.413727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:55.142 [2024-12-09 23:01:30.415990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:55.142 [2024-12-09 23:01:30.416315] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:55.142 [2024-12-09 23:01:30.416419] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:55.142 [2024-12-09 23:01:30.416764] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:55.142 [2024-12-09 23:01:30.417047] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:55.142 [2024-12-09 23:01:30.417188] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:55.142 [2024-12-09 23:01:30.417370] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.142 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.143 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.143 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:55.143 "name": "raid_bdev1", 00:19:55.143 "uuid": "c1cfe935-3be5-4572-932d-bcce28279b7e", 00:19:55.143 "strip_size_kb": 0, 00:19:55.143 "state": "online", 00:19:55.143 "raid_level": "raid1", 00:19:55.143 "superblock": true, 00:19:55.143 "num_base_bdevs": 2, 00:19:55.143 "num_base_bdevs_discovered": 2, 00:19:55.143 "num_base_bdevs_operational": 2, 00:19:55.143 "base_bdevs_list": [ 00:19:55.143 { 00:19:55.143 "name": "pt1", 00:19:55.143 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:19:55.143 "is_configured": true, 00:19:55.143 "data_offset": 2048, 00:19:55.143 "data_size": 63488 00:19:55.143 }, 00:19:55.143 { 00:19:55.143 "name": "pt2", 00:19:55.143 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:55.143 "is_configured": true, 00:19:55.143 "data_offset": 2048, 00:19:55.143 "data_size": 63488 00:19:55.143 } 00:19:55.143 ] 00:19:55.143 }' 00:19:55.143 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:55.143 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.404 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:55.404 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:55.404 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:55.404 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:55.404 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:55.404 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:55.404 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:55.404 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.404 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.404 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:55.404 [2024-12-09 23:01:30.734139] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:55.404 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.404 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:19:55.404 "name": "raid_bdev1", 00:19:55.404 "aliases": [ 00:19:55.404 "c1cfe935-3be5-4572-932d-bcce28279b7e" 00:19:55.404 ], 00:19:55.404 "product_name": "Raid Volume", 00:19:55.404 "block_size": 512, 00:19:55.404 "num_blocks": 63488, 00:19:55.404 "uuid": "c1cfe935-3be5-4572-932d-bcce28279b7e", 00:19:55.404 "assigned_rate_limits": { 00:19:55.404 "rw_ios_per_sec": 0, 00:19:55.404 "rw_mbytes_per_sec": 0, 00:19:55.404 "r_mbytes_per_sec": 0, 00:19:55.404 "w_mbytes_per_sec": 0 00:19:55.404 }, 00:19:55.404 "claimed": false, 00:19:55.404 "zoned": false, 00:19:55.404 "supported_io_types": { 00:19:55.404 "read": true, 00:19:55.404 "write": true, 00:19:55.404 "unmap": false, 00:19:55.404 "flush": false, 00:19:55.404 "reset": true, 00:19:55.404 "nvme_admin": false, 00:19:55.404 "nvme_io": false, 00:19:55.404 "nvme_io_md": false, 00:19:55.404 "write_zeroes": true, 00:19:55.404 "zcopy": false, 00:19:55.404 "get_zone_info": false, 00:19:55.404 "zone_management": false, 00:19:55.404 "zone_append": false, 00:19:55.404 "compare": false, 00:19:55.404 "compare_and_write": false, 00:19:55.404 "abort": false, 00:19:55.404 "seek_hole": false, 00:19:55.404 "seek_data": false, 00:19:55.404 "copy": false, 00:19:55.404 "nvme_iov_md": false 00:19:55.404 }, 00:19:55.404 "memory_domains": [ 00:19:55.404 { 00:19:55.404 "dma_device_id": "system", 00:19:55.404 "dma_device_type": 1 00:19:55.404 }, 00:19:55.404 { 00:19:55.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.404 "dma_device_type": 2 00:19:55.404 }, 00:19:55.404 { 00:19:55.404 "dma_device_id": "system", 00:19:55.404 "dma_device_type": 1 00:19:55.404 }, 00:19:55.404 { 00:19:55.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.404 "dma_device_type": 2 00:19:55.404 } 00:19:55.404 ], 00:19:55.404 "driver_specific": { 00:19:55.404 "raid": { 00:19:55.404 "uuid": "c1cfe935-3be5-4572-932d-bcce28279b7e", 00:19:55.404 "strip_size_kb": 0, 00:19:55.404 "state": "online", 00:19:55.404 "raid_level": "raid1", 
00:19:55.404 "superblock": true, 00:19:55.404 "num_base_bdevs": 2, 00:19:55.404 "num_base_bdevs_discovered": 2, 00:19:55.404 "num_base_bdevs_operational": 2, 00:19:55.404 "base_bdevs_list": [ 00:19:55.404 { 00:19:55.404 "name": "pt1", 00:19:55.404 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:55.404 "is_configured": true, 00:19:55.404 "data_offset": 2048, 00:19:55.404 "data_size": 63488 00:19:55.404 }, 00:19:55.404 { 00:19:55.404 "name": "pt2", 00:19:55.404 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:55.404 "is_configured": true, 00:19:55.404 "data_offset": 2048, 00:19:55.404 "data_size": 63488 00:19:55.404 } 00:19:55.404 ] 00:19:55.404 } 00:19:55.404 } 00:19:55.404 }' 00:19:55.404 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:55.665 pt2' 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.665 [2024-12-09 23:01:30.902147] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c1cfe935-3be5-4572-932d-bcce28279b7e 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c1cfe935-3be5-4572-932d-bcce28279b7e ']' 00:19:55.665 23:01:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.665 [2024-12-09 23:01:30.929781] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:55.665 [2024-12-09 23:01:30.929947] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:55.665 [2024-12-09 23:01:30.930125] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:55.665 [2024-12-09 23:01:30.930209] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:55.665 [2024-12-09 23:01:30.930224] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:55.665 23:01:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.665 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.665 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:55.665 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:55.665 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:19:55.665 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:55.665 23:01:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.927 [2024-12-09 23:01:31.029830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:55.927 [2024-12-09 23:01:31.032187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:55.927 [2024-12-09 23:01:31.032267] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:55.927 [2024-12-09 23:01:31.032336] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:55.927 [2024-12-09 23:01:31.032352] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:55.927 [2024-12-09 23:01:31.032363] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:55.927 request: 00:19:55.927 { 00:19:55.927 "name": "raid_bdev1", 00:19:55.927 "raid_level": "raid1", 00:19:55.927 "base_bdevs": [ 00:19:55.927 "malloc1", 00:19:55.927 "malloc2" 00:19:55.927 ], 00:19:55.927 "superblock": false, 00:19:55.927 "method": "bdev_raid_create", 00:19:55.927 "req_id": 1 00:19:55.927 } 00:19:55.927 Got 
JSON-RPC error response 00:19:55.927 response: 00:19:55.927 { 00:19:55.927 "code": -17, 00:19:55.927 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:55.927 } 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.927 [2024-12-09 23:01:31.081835] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:55.927 [2024-12-09 23:01:31.082049] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:19:55.927 [2024-12-09 23:01:31.082081] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:55.927 [2024-12-09 23:01:31.082093] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.927 [2024-12-09 23:01:31.084725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.927 [2024-12-09 23:01:31.084776] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:55.927 [2024-12-09 23:01:31.084877] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:55.927 [2024-12-09 23:01:31.084970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:55.927 pt1 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:55.927 
23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:55.927 "name": "raid_bdev1", 00:19:55.927 "uuid": "c1cfe935-3be5-4572-932d-bcce28279b7e", 00:19:55.927 "strip_size_kb": 0, 00:19:55.927 "state": "configuring", 00:19:55.927 "raid_level": "raid1", 00:19:55.927 "superblock": true, 00:19:55.927 "num_base_bdevs": 2, 00:19:55.927 "num_base_bdevs_discovered": 1, 00:19:55.927 "num_base_bdevs_operational": 2, 00:19:55.927 "base_bdevs_list": [ 00:19:55.927 { 00:19:55.927 "name": "pt1", 00:19:55.927 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:55.927 "is_configured": true, 00:19:55.927 "data_offset": 2048, 00:19:55.927 "data_size": 63488 00:19:55.927 }, 00:19:55.927 { 00:19:55.927 "name": null, 00:19:55.927 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:55.927 "is_configured": false, 00:19:55.927 "data_offset": 2048, 00:19:55.927 "data_size": 63488 00:19:55.927 } 00:19:55.927 ] 00:19:55.927 }' 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:55.927 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.188 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:56.188 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:56.188 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:19:56.188 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:56.188 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.188 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.188 [2024-12-09 23:01:31.417935] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:56.188 [2024-12-09 23:01:31.418194] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:56.188 [2024-12-09 23:01:31.418253] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:56.188 [2024-12-09 23:01:31.418339] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:56.188 [2024-12-09 23:01:31.418896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:56.188 [2024-12-09 23:01:31.418937] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:56.188 [2024-12-09 23:01:31.419031] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:56.188 [2024-12-09 23:01:31.419063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:56.188 [2024-12-09 23:01:31.419217] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:56.188 [2024-12-09 23:01:31.419231] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:56.188 [2024-12-09 23:01:31.419512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:56.188 [2024-12-09 23:01:31.419662] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:56.188 [2024-12-09 23:01:31.419677] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:19:56.188 [2024-12-09 23:01:31.419824] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:56.188 pt2 00:19:56.188 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.188 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:56.188 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:56.188 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:56.188 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:56.188 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:56.188 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:56.188 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:56.188 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:56.188 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.188 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.188 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:56.188 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:56.188 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.188 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.188 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.188 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:19:56.188 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.188 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.188 "name": "raid_bdev1", 00:19:56.188 "uuid": "c1cfe935-3be5-4572-932d-bcce28279b7e", 00:19:56.188 "strip_size_kb": 0, 00:19:56.188 "state": "online", 00:19:56.188 "raid_level": "raid1", 00:19:56.188 "superblock": true, 00:19:56.188 "num_base_bdevs": 2, 00:19:56.188 "num_base_bdevs_discovered": 2, 00:19:56.188 "num_base_bdevs_operational": 2, 00:19:56.188 "base_bdevs_list": [ 00:19:56.188 { 00:19:56.188 "name": "pt1", 00:19:56.188 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:56.188 "is_configured": true, 00:19:56.188 "data_offset": 2048, 00:19:56.188 "data_size": 63488 00:19:56.188 }, 00:19:56.188 { 00:19:56.188 "name": "pt2", 00:19:56.188 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:56.188 "is_configured": true, 00:19:56.188 "data_offset": 2048, 00:19:56.188 "data_size": 63488 00:19:56.188 } 00:19:56.188 ] 00:19:56.188 }' 00:19:56.188 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:56.188 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.451 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:56.452 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:56.452 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:56.452 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:56.452 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:56.452 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:56.452 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # jq '.[]' 00:19:56.452 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:56.452 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.452 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.452 [2024-12-09 23:01:31.742315] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:56.452 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.452 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:56.452 "name": "raid_bdev1", 00:19:56.452 "aliases": [ 00:19:56.452 "c1cfe935-3be5-4572-932d-bcce28279b7e" 00:19:56.452 ], 00:19:56.452 "product_name": "Raid Volume", 00:19:56.452 "block_size": 512, 00:19:56.452 "num_blocks": 63488, 00:19:56.452 "uuid": "c1cfe935-3be5-4572-932d-bcce28279b7e", 00:19:56.452 "assigned_rate_limits": { 00:19:56.452 "rw_ios_per_sec": 0, 00:19:56.452 "rw_mbytes_per_sec": 0, 00:19:56.452 "r_mbytes_per_sec": 0, 00:19:56.452 "w_mbytes_per_sec": 0 00:19:56.452 }, 00:19:56.452 "claimed": false, 00:19:56.452 "zoned": false, 00:19:56.452 "supported_io_types": { 00:19:56.452 "read": true, 00:19:56.452 "write": true, 00:19:56.452 "unmap": false, 00:19:56.452 "flush": false, 00:19:56.452 "reset": true, 00:19:56.452 "nvme_admin": false, 00:19:56.452 "nvme_io": false, 00:19:56.452 "nvme_io_md": false, 00:19:56.452 "write_zeroes": true, 00:19:56.452 "zcopy": false, 00:19:56.452 "get_zone_info": false, 00:19:56.452 "zone_management": false, 00:19:56.452 "zone_append": false, 00:19:56.452 "compare": false, 00:19:56.452 "compare_and_write": false, 00:19:56.452 "abort": false, 00:19:56.452 "seek_hole": false, 00:19:56.452 "seek_data": false, 00:19:56.452 "copy": false, 00:19:56.452 "nvme_iov_md": false 00:19:56.452 }, 00:19:56.452 "memory_domains": [ 00:19:56.452 { 00:19:56.452 "dma_device_id": 
"system", 00:19:56.452 "dma_device_type": 1 00:19:56.452 }, 00:19:56.452 { 00:19:56.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:56.452 "dma_device_type": 2 00:19:56.452 }, 00:19:56.452 { 00:19:56.452 "dma_device_id": "system", 00:19:56.452 "dma_device_type": 1 00:19:56.452 }, 00:19:56.452 { 00:19:56.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:56.452 "dma_device_type": 2 00:19:56.452 } 00:19:56.452 ], 00:19:56.452 "driver_specific": { 00:19:56.452 "raid": { 00:19:56.452 "uuid": "c1cfe935-3be5-4572-932d-bcce28279b7e", 00:19:56.452 "strip_size_kb": 0, 00:19:56.452 "state": "online", 00:19:56.452 "raid_level": "raid1", 00:19:56.452 "superblock": true, 00:19:56.452 "num_base_bdevs": 2, 00:19:56.452 "num_base_bdevs_discovered": 2, 00:19:56.452 "num_base_bdevs_operational": 2, 00:19:56.452 "base_bdevs_list": [ 00:19:56.452 { 00:19:56.452 "name": "pt1", 00:19:56.452 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:56.452 "is_configured": true, 00:19:56.452 "data_offset": 2048, 00:19:56.452 "data_size": 63488 00:19:56.452 }, 00:19:56.452 { 00:19:56.452 "name": "pt2", 00:19:56.452 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:56.452 "is_configured": true, 00:19:56.452 "data_offset": 2048, 00:19:56.452 "data_size": 63488 00:19:56.452 } 00:19:56.452 ] 00:19:56.452 } 00:19:56.452 } 00:19:56.452 }' 00:19:56.452 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:56.452 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:56.452 pt2' 00:19:56.452 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.711 [2024-12-09 23:01:31.910338] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c1cfe935-3be5-4572-932d-bcce28279b7e '!=' c1cfe935-3be5-4572-932d-bcce28279b7e ']' 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.711 [2024-12-09 23:01:31.942133] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.711 "name": "raid_bdev1", 00:19:56.711 "uuid": "c1cfe935-3be5-4572-932d-bcce28279b7e", 00:19:56.711 "strip_size_kb": 0, 00:19:56.711 "state": "online", 00:19:56.711 "raid_level": "raid1", 00:19:56.711 "superblock": true, 00:19:56.711 "num_base_bdevs": 2, 00:19:56.711 "num_base_bdevs_discovered": 1, 00:19:56.711 "num_base_bdevs_operational": 1, 00:19:56.711 "base_bdevs_list": [ 00:19:56.711 { 00:19:56.711 "name": null, 00:19:56.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.711 "is_configured": false, 00:19:56.711 "data_offset": 0, 00:19:56.711 "data_size": 63488 00:19:56.711 }, 00:19:56.711 { 00:19:56.711 "name": "pt2", 00:19:56.711 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:56.711 "is_configured": true, 00:19:56.711 "data_offset": 2048, 00:19:56.711 "data_size": 63488 00:19:56.711 } 00:19:56.711 ] 00:19:56.711 }' 
00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:56.711 23:01:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.972 [2024-12-09 23:01:32.266143] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:56.972 [2024-12-09 23:01:32.266181] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:56.972 [2024-12-09 23:01:32.266269] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:56.972 [2024-12-09 23:01:32.266325] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:56.972 [2024-12-09 23:01:32.266338] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' 
']' 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.972 [2024-12-09 23:01:32.314148] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:56.972 [2024-12-09 23:01:32.314368] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:56.972 [2024-12-09 23:01:32.314415] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:56.972 [2024-12-09 23:01:32.314600] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:56.972 
[2024-12-09 23:01:32.317244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:56.972 [2024-12-09 23:01:32.317302] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:56.972 [2024-12-09 23:01:32.317402] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:56.972 [2024-12-09 23:01:32.317452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:56.972 [2024-12-09 23:01:32.317561] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:56.972 [2024-12-09 23:01:32.317576] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:56.972 [2024-12-09 23:01:32.317842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:56.972 pt2 00:19:56.972 [2024-12-09 23:01:32.317993] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:56.972 [2024-12-09 23:01:32.318010] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:56.972 [2024-12-09 23:01:32.318220] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.972 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.233 23:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.233 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.233 "name": "raid_bdev1", 00:19:57.233 "uuid": "c1cfe935-3be5-4572-932d-bcce28279b7e", 00:19:57.233 "strip_size_kb": 0, 00:19:57.233 "state": "online", 00:19:57.233 "raid_level": "raid1", 00:19:57.233 "superblock": true, 00:19:57.233 "num_base_bdevs": 2, 00:19:57.233 "num_base_bdevs_discovered": 1, 00:19:57.233 "num_base_bdevs_operational": 1, 00:19:57.233 "base_bdevs_list": [ 00:19:57.233 { 00:19:57.233 "name": null, 00:19:57.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.233 "is_configured": false, 00:19:57.233 "data_offset": 2048, 00:19:57.233 "data_size": 63488 00:19:57.233 }, 00:19:57.233 { 00:19:57.233 "name": "pt2", 00:19:57.233 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:57.233 "is_configured": true, 00:19:57.233 "data_offset": 2048, 00:19:57.233 "data_size": 63488 00:19:57.233 } 00:19:57.233 ] 00:19:57.233 }' 
00:19:57.233 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.233 23:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.496 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:57.496 23:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.496 23:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.496 [2024-12-09 23:01:32.646256] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:57.496 [2024-12-09 23:01:32.646296] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:57.496 [2024-12-09 23:01:32.646384] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:57.496 [2024-12-09 23:01:32.646448] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:57.496 [2024-12-09 23:01:32.646459] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:57.496 23:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.496 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.496 23:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.496 23:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.496 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:57.496 23:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.496 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:57.496 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:19:57.496 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:57.496 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:57.496 23:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.496 23:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.496 [2024-12-09 23:01:32.690275] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:57.496 [2024-12-09 23:01:32.690492] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.496 [2024-12-09 23:01:32.690543] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:57.496 [2024-12-09 23:01:32.690630] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.496 [2024-12-09 23:01:32.693293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.496 [2024-12-09 23:01:32.693470] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:57.496 [2024-12-09 23:01:32.693587] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:57.496 [2024-12-09 23:01:32.693641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:57.496 [2024-12-09 23:01:32.693800] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:57.496 [2024-12-09 23:01:32.693812] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:57.496 [2024-12-09 23:01:32.693830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:57.496 [2024-12-09 23:01:32.693882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:19:57.496 [2024-12-09 23:01:32.693968] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:57.496 [2024-12-09 23:01:32.693977] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:57.496 [2024-12-09 23:01:32.694431] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:57.496 [2024-12-09 23:01:32.694633] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:57.496 [2024-12-09 23:01:32.694670] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:57.496 [2024-12-09 23:01:32.695422] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:57.496 pt1 00:19:57.496 23:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.496 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:19:57.496 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:57.496 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:57.496 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:57.496 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:57.496 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:57.496 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:57.496 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.496 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.496 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:19:57.496 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.496 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.496 23:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.496 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.496 23:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.496 23:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.496 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.496 "name": "raid_bdev1", 00:19:57.496 "uuid": "c1cfe935-3be5-4572-932d-bcce28279b7e", 00:19:57.496 "strip_size_kb": 0, 00:19:57.496 "state": "online", 00:19:57.496 "raid_level": "raid1", 00:19:57.496 "superblock": true, 00:19:57.496 "num_base_bdevs": 2, 00:19:57.496 "num_base_bdevs_discovered": 1, 00:19:57.496 "num_base_bdevs_operational": 1, 00:19:57.496 "base_bdevs_list": [ 00:19:57.496 { 00:19:57.496 "name": null, 00:19:57.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.496 "is_configured": false, 00:19:57.496 "data_offset": 2048, 00:19:57.496 "data_size": 63488 00:19:57.496 }, 00:19:57.496 { 00:19:57.496 "name": "pt2", 00:19:57.496 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:57.496 "is_configured": true, 00:19:57.496 "data_offset": 2048, 00:19:57.496 "data_size": 63488 00:19:57.496 } 00:19:57.496 ] 00:19:57.496 }' 00:19:57.496 23:01:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.496 23:01:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.758 23:01:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:57.758 23:01:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:57.758 23:01:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.758 23:01:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.758 23:01:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.758 23:01:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:57.758 23:01:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:57.758 23:01:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:57.758 23:01:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.758 23:01:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.758 [2024-12-09 23:01:33.047708] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:57.758 23:01:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.758 23:01:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' c1cfe935-3be5-4572-932d-bcce28279b7e '!=' c1cfe935-3be5-4572-932d-bcce28279b7e ']' 00:19:57.758 23:01:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61737 00:19:57.758 23:01:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61737 ']' 00:19:57.758 23:01:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61737 00:19:57.758 23:01:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:19:57.758 23:01:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:57.758 23:01:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61737 00:19:57.758 
23:01:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:57.758 killing process with pid 61737 00:19:57.758 23:01:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:57.758 23:01:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61737' 00:19:57.758 23:01:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61737 00:19:57.758 [2024-12-09 23:01:33.106705] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:57.758 23:01:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61737 00:19:57.758 [2024-12-09 23:01:33.106821] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:57.758 [2024-12-09 23:01:33.106879] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:57.758 [2024-12-09 23:01:33.106896] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:58.020 [2024-12-09 23:01:33.252910] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:58.964 ************************************ 00:19:58.964 END TEST raid_superblock_test 00:19:58.964 ************************************ 00:19:58.964 23:01:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:19:58.964 00:19:58.964 real 0m4.727s 00:19:58.964 user 0m6.960s 00:19:58.964 sys 0m0.862s 00:19:58.964 23:01:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:58.964 23:01:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.964 23:01:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:19:58.964 23:01:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:58.964 23:01:34 bdev_raid 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:58.964 23:01:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:58.964 ************************************ 00:19:58.964 START TEST raid_read_error_test 00:19:58.964 ************************************ 00:19:58.964 23:01:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:19:58.964 23:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:19:58.964 23:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:19:58.964 23:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:19:58.964 23:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:19:58.964 23:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:58.964 23:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:19:58.964 23:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:58.964 23:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:58.964 23:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:19:58.964 23:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:58.964 23:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:58.964 23:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:58.964 23:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:19:58.964 23:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:19:58.964 23:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:19:58.964 23:01:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:19:58.964 23:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:19:58.964 23:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:19:58.964 23:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:19:58.964 23:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:19:58.964 23:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:19:58.964 23:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2D7sGyDbA1 00:19:58.964 23:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62053 00:19:58.964 23:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62053 00:19:58.964 23:01:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62053 ']' 00:19:58.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.964 23:01:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.964 23:01:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:58.964 23:01:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:58.964 23:01:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:58.964 23:01:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.964 23:01:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:58.964 [2024-12-09 23:01:34.225909] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:19:58.964 [2024-12-09 23:01:34.226063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62053 ] 00:19:59.225 [2024-12-09 23:01:34.392585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.225 [2024-12-09 23:01:34.537548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.484 [2024-12-09 23:01:34.702780] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:59.484 [2024-12-09 23:01:34.702848] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:59.745 23:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:59.745 23:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:19:59.745 23:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:59.745 23:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:59.745 23:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.745 23:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.007 BaseBdev1_malloc 00:20:00.007 23:01:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.007 23:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:20:00.007 23:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.007 23:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.007 true 00:20:00.007 23:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.007 23:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:00.007 23:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.007 23:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.007 [2024-12-09 23:01:35.143140] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:00.007 [2024-12-09 23:01:35.143382] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:00.007 [2024-12-09 23:01:35.143415] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:00.007 [2024-12-09 23:01:35.143428] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:00.007 [2024-12-09 23:01:35.146002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:00.007 BaseBdev1 00:20:00.007 [2024-12-09 23:01:35.146247] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:00.007 23:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.007 23:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:00.007 23:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:20:00.007 23:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.007 23:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.007 BaseBdev2_malloc 00:20:00.007 23:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.007 23:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:20:00.007 23:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.007 23:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.007 true 00:20:00.008 23:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.008 23:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:00.008 23:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.008 23:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.008 [2024-12-09 23:01:35.192968] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:00.008 [2024-12-09 23:01:35.193209] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:00.008 [2024-12-09 23:01:35.193258] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:20:00.008 [2024-12-09 23:01:35.193334] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:00.008 [2024-12-09 23:01:35.195821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:00.008 BaseBdev2 00:20:00.008 [2024-12-09 23:01:35.195999] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:00.008 23:01:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.008 23:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:20:00.008 23:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.008 23:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.008 [2024-12-09 23:01:35.201068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:00.008 [2024-12-09 23:01:35.203266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:00.008 [2024-12-09 23:01:35.203501] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:00.008 [2024-12-09 23:01:35.203519] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:00.008 [2024-12-09 23:01:35.203811] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:00.008 [2024-12-09 23:01:35.203998] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:00.008 [2024-12-09 23:01:35.204008] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:00.008 [2024-12-09 23:01:35.204217] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:00.008 23:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.008 23:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:00.008 23:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:00.008 23:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:00.008 23:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:20:00.008 23:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:00.008 23:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:00.008 23:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:00.008 23:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:00.008 23:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:00.008 23:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:00.008 23:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.008 23:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.008 23:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.008 23:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.008 23:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.008 23:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:00.008 "name": "raid_bdev1", 00:20:00.008 "uuid": "4c27084b-abac-4a90-ba66-c62259bd43c2", 00:20:00.008 "strip_size_kb": 0, 00:20:00.008 "state": "online", 00:20:00.008 "raid_level": "raid1", 00:20:00.008 "superblock": true, 00:20:00.008 "num_base_bdevs": 2, 00:20:00.008 "num_base_bdevs_discovered": 2, 00:20:00.008 "num_base_bdevs_operational": 2, 00:20:00.008 "base_bdevs_list": [ 00:20:00.008 { 00:20:00.008 "name": "BaseBdev1", 00:20:00.008 "uuid": "2d7c14c6-c313-5fc9-82ba-1cae584eafce", 00:20:00.008 "is_configured": true, 00:20:00.008 "data_offset": 2048, 00:20:00.008 "data_size": 63488 00:20:00.008 }, 00:20:00.008 { 00:20:00.008 "name": "BaseBdev2", 00:20:00.008 "uuid": 
"17a5f9cb-075e-5083-aaf5-4dc7b3b508a5", 00:20:00.008 "is_configured": true, 00:20:00.008 "data_offset": 2048, 00:20:00.008 "data_size": 63488 00:20:00.008 } 00:20:00.008 ] 00:20:00.008 }' 00:20:00.008 23:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:00.008 23:01:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.269 23:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:20:00.269 23:01:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:20:00.269 [2024-12-09 23:01:35.618278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:20:01.213 23:01:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:20:01.213 23:01:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.213 23:01:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.213 23:01:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.213 23:01:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:20:01.213 23:01:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:20:01.213 23:01:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:20:01.213 23:01:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:20:01.213 23:01:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:01.213 23:01:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:01.213 23:01:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:20:01.213 23:01:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:01.213 23:01:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:01.213 23:01:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:01.213 23:01:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:01.213 23:01:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:01.213 23:01:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:01.213 23:01:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:01.213 23:01:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.213 23:01:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.213 23:01:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.213 23:01:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.213 23:01:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.213 23:01:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:01.213 "name": "raid_bdev1", 00:20:01.213 "uuid": "4c27084b-abac-4a90-ba66-c62259bd43c2", 00:20:01.213 "strip_size_kb": 0, 00:20:01.213 "state": "online", 00:20:01.213 "raid_level": "raid1", 00:20:01.213 "superblock": true, 00:20:01.213 "num_base_bdevs": 2, 00:20:01.213 "num_base_bdevs_discovered": 2, 00:20:01.213 "num_base_bdevs_operational": 2, 00:20:01.213 "base_bdevs_list": [ 00:20:01.213 { 00:20:01.213 "name": "BaseBdev1", 00:20:01.213 "uuid": "2d7c14c6-c313-5fc9-82ba-1cae584eafce", 00:20:01.213 "is_configured": true, 00:20:01.213 "data_offset": 2048, 00:20:01.213 
"data_size": 63488 00:20:01.213 }, 00:20:01.213 { 00:20:01.213 "name": "BaseBdev2", 00:20:01.213 "uuid": "17a5f9cb-075e-5083-aaf5-4dc7b3b508a5", 00:20:01.213 "is_configured": true, 00:20:01.213 "data_offset": 2048, 00:20:01.213 "data_size": 63488 00:20:01.213 } 00:20:01.213 ] 00:20:01.213 }' 00:20:01.213 23:01:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:01.213 23:01:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.786 23:01:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:01.786 23:01:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.786 23:01:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.786 [2024-12-09 23:01:36.860839] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:01.786 [2024-12-09 23:01:36.860886] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:01.786 [2024-12-09 23:01:36.864118] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:01.786 [2024-12-09 23:01:36.864180] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:01.786 [2024-12-09 23:01:36.864273] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:01.786 [2024-12-09 23:01:36.864286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:01.786 { 00:20:01.786 "results": [ 00:20:01.786 { 00:20:01.786 "job": "raid_bdev1", 00:20:01.786 "core_mask": "0x1", 00:20:01.786 "workload": "randrw", 00:20:01.786 "percentage": 50, 00:20:01.786 "status": "finished", 00:20:01.786 "queue_depth": 1, 00:20:01.786 "io_size": 131072, 00:20:01.786 "runtime": 1.240505, 00:20:01.786 "iops": 13889.504677530522, 00:20:01.786 "mibps": 1736.1880846913152, 
00:20:01.786 "io_failed": 0, 00:20:01.786 "io_timeout": 0, 00:20:01.786 "avg_latency_us": 68.64548167328898, 00:20:01.786 "min_latency_us": 30.916923076923077, 00:20:01.786 "max_latency_us": 1739.2246153846154 00:20:01.786 } 00:20:01.786 ], 00:20:01.786 "core_count": 1 00:20:01.786 } 00:20:01.786 23:01:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.786 23:01:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62053 00:20:01.786 23:01:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62053 ']' 00:20:01.786 23:01:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62053 00:20:01.786 23:01:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:20:01.786 23:01:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:01.786 23:01:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62053 00:20:01.786 killing process with pid 62053 00:20:01.786 23:01:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:01.786 23:01:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:01.786 23:01:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62053' 00:20:01.786 23:01:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62053 00:20:01.786 23:01:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62053 00:20:01.786 [2024-12-09 23:01:36.893889] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:01.786 [2024-12-09 23:01:36.988074] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:02.729 23:01:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.2D7sGyDbA1 00:20:02.729 23:01:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:20:02.729 23:01:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:20:02.729 ************************************ 00:20:02.729 END TEST raid_read_error_test 00:20:02.729 ************************************ 00:20:02.729 23:01:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:20:02.729 23:01:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:20:02.729 23:01:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:02.729 23:01:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:20:02.729 23:01:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:20:02.729 00:20:02.729 real 0m3.730s 00:20:02.729 user 0m4.350s 00:20:02.729 sys 0m0.497s 00:20:02.729 23:01:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:02.729 23:01:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.729 23:01:37 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:20:02.729 23:01:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:02.729 23:01:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:02.729 23:01:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:02.729 ************************************ 00:20:02.729 START TEST raid_write_error_test 00:20:02.729 ************************************ 00:20:02.729 23:01:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:20:02.729 23:01:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:20:02.729 23:01:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:20:02.729 23:01:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:20:02.729 23:01:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:20:02.729 23:01:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:02.729 23:01:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:20:02.729 23:01:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:02.729 23:01:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:02.729 23:01:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:20:02.729 23:01:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:02.729 23:01:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:02.729 23:01:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:02.729 23:01:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:20:02.729 23:01:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:20:02.729 23:01:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:20:02.729 23:01:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:20:02.729 23:01:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:20:02.729 23:01:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:20:02.729 23:01:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:20:02.729 23:01:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:20:02.729 23:01:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:20:02.729 23:01:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.yeibJI1rUb 00:20:02.729 23:01:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62193 00:20:02.729 23:01:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62193 00:20:02.729 23:01:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:02.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.729 23:01:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62193 ']' 00:20:02.729 23:01:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.729 23:01:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:02.729 23:01:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.729 23:01:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:02.729 23:01:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.729 [2024-12-09 23:01:38.031008] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:20:02.729 [2024-12-09 23:01:38.031427] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62193 ] 00:20:02.991 [2024-12-09 23:01:38.194763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.991 [2024-12-09 23:01:38.340012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.254 [2024-12-09 23:01:38.510221] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:03.254 [2024-12-09 23:01:38.510279] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:03.833 23:01:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:03.833 23:01:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:20:03.833 23:01:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:03.833 23:01:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:03.833 23:01:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.833 23:01:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.833 BaseBdev1_malloc 00:20:03.833 23:01:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.833 23:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:20:03.833 23:01:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.833 23:01:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.833 true 00:20:03.833 23:01:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:20:03.833 23:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:03.833 23:01:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.833 23:01:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.833 [2024-12-09 23:01:39.044200] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:03.833 [2024-12-09 23:01:39.044447] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:03.833 [2024-12-09 23:01:39.044484] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:03.833 [2024-12-09 23:01:39.044496] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.833 [2024-12-09 23:01:39.047089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.833 [2024-12-09 23:01:39.047172] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:03.833 BaseBdev1 00:20:03.833 23:01:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.833 23:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:03.833 23:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:03.833 23:01:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.833 23:01:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.833 BaseBdev2_malloc 00:20:03.833 23:01:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.833 23:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:20:03.833 23:01:39 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.833 23:01:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.833 true 00:20:03.833 23:01:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.833 23:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:03.834 23:01:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.834 23:01:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.834 [2024-12-09 23:01:39.102117] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:03.834 [2024-12-09 23:01:39.102355] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:03.834 [2024-12-09 23:01:39.102403] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:20:03.834 [2024-12-09 23:01:39.102549] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.834 [2024-12-09 23:01:39.105150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.834 [2024-12-09 23:01:39.105205] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:03.834 BaseBdev2 00:20:03.834 23:01:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.834 23:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:20:03.834 23:01:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.834 23:01:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.834 [2024-12-09 23:01:39.110202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:20:03.834 [2024-12-09 23:01:39.112541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:03.834 [2024-12-09 23:01:39.112936] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:03.834 [2024-12-09 23:01:39.112989] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:03.834 [2024-12-09 23:01:39.113428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:03.834 [2024-12-09 23:01:39.113740] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:03.834 [2024-12-09 23:01:39.113779] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:03.834 [2024-12-09 23:01:39.114127] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:03.834 23:01:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.834 23:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:03.834 23:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:03.834 23:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:03.834 23:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:03.834 23:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:03.834 23:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:03.834 23:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:03.834 23:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:03.834 23:01:39 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:03.834 23:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:03.834 23:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.834 23:01:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.834 23:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.834 23:01:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.834 23:01:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.834 23:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:03.834 "name": "raid_bdev1", 00:20:03.834 "uuid": "ee29cc36-f88a-45a9-9209-40e46419c048", 00:20:03.834 "strip_size_kb": 0, 00:20:03.834 "state": "online", 00:20:03.834 "raid_level": "raid1", 00:20:03.834 "superblock": true, 00:20:03.834 "num_base_bdevs": 2, 00:20:03.834 "num_base_bdevs_discovered": 2, 00:20:03.834 "num_base_bdevs_operational": 2, 00:20:03.834 "base_bdevs_list": [ 00:20:03.834 { 00:20:03.834 "name": "BaseBdev1", 00:20:03.834 "uuid": "26abd947-8c64-54b0-8148-6f85a968069e", 00:20:03.834 "is_configured": true, 00:20:03.834 "data_offset": 2048, 00:20:03.834 "data_size": 63488 00:20:03.834 }, 00:20:03.834 { 00:20:03.834 "name": "BaseBdev2", 00:20:03.834 "uuid": "fa913b6c-e13b-5f4b-aecc-e5bc04953244", 00:20:03.834 "is_configured": true, 00:20:03.834 "data_offset": 2048, 00:20:03.834 "data_size": 63488 00:20:03.834 } 00:20:03.834 ] 00:20:03.834 }' 00:20:03.834 23:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:03.834 23:01:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.095 23:01:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:20:04.095 23:01:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:20:04.355 [2024-12-09 23:01:39.535424] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:20:05.330 23:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:20:05.330 23:01:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.330 23:01:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.330 [2024-12-09 23:01:40.443257] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:20:05.330 [2024-12-09 23:01:40.443336] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:05.330 [2024-12-09 23:01:40.443552] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:20:05.330 23:01:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.330 23:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:20:05.330 23:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:20:05.330 23:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:20:05.330 23:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:20:05.330 23:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:05.330 23:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:05.330 23:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:05.330 23:01:40 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:05.330 23:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:05.330 23:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:05.330 23:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:05.330 23:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:05.330 23:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:05.330 23:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:05.330 23:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.330 23:01:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.330 23:01:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.331 23:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.331 23:01:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.331 23:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:05.331 "name": "raid_bdev1", 00:20:05.331 "uuid": "ee29cc36-f88a-45a9-9209-40e46419c048", 00:20:05.331 "strip_size_kb": 0, 00:20:05.331 "state": "online", 00:20:05.331 "raid_level": "raid1", 00:20:05.331 "superblock": true, 00:20:05.331 "num_base_bdevs": 2, 00:20:05.331 "num_base_bdevs_discovered": 1, 00:20:05.331 "num_base_bdevs_operational": 1, 00:20:05.331 "base_bdevs_list": [ 00:20:05.331 { 00:20:05.331 "name": null, 00:20:05.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.331 "is_configured": false, 00:20:05.331 "data_offset": 0, 00:20:05.331 "data_size": 63488 00:20:05.331 }, 00:20:05.331 { 00:20:05.331 "name": 
"BaseBdev2", 00:20:05.331 "uuid": "fa913b6c-e13b-5f4b-aecc-e5bc04953244", 00:20:05.331 "is_configured": true, 00:20:05.331 "data_offset": 2048, 00:20:05.331 "data_size": 63488 00:20:05.331 } 00:20:05.331 ] 00:20:05.331 }' 00:20:05.331 23:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:05.331 23:01:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.594 23:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:05.594 23:01:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.594 23:01:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.594 [2024-12-09 23:01:40.781474] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:05.594 [2024-12-09 23:01:40.781514] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:05.594 [2024-12-09 23:01:40.784887] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:05.594 [2024-12-09 23:01:40.785092] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:05.594 [2024-12-09 23:01:40.785223] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:05.594 [2024-12-09 23:01:40.785322] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:05.594 { 00:20:05.594 "results": [ 00:20:05.594 { 00:20:05.594 "job": "raid_bdev1", 00:20:05.594 "core_mask": "0x1", 00:20:05.594 "workload": "randrw", 00:20:05.594 "percentage": 50, 00:20:05.594 "status": "finished", 00:20:05.594 "queue_depth": 1, 00:20:05.594 "io_size": 131072, 00:20:05.594 "runtime": 1.24358, 00:20:05.594 "iops": 15662.844368677528, 00:20:05.594 "mibps": 1957.855546084691, 00:20:05.594 "io_failed": 0, 00:20:05.594 "io_timeout": 0, 
00:20:05.594 "avg_latency_us": 60.488899349956945, 00:20:05.594 "min_latency_us": 29.53846153846154, 00:20:05.594 "max_latency_us": 1726.6215384615384 00:20:05.594 } 00:20:05.594 ], 00:20:05.594 "core_count": 1 00:20:05.594 } 00:20:05.594 23:01:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.594 23:01:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62193 00:20:05.594 23:01:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62193 ']' 00:20:05.594 23:01:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62193 00:20:05.594 23:01:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:20:05.594 23:01:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:05.594 23:01:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62193 00:20:05.594 killing process with pid 62193 00:20:05.594 23:01:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:05.594 23:01:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:05.594 23:01:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62193' 00:20:05.594 23:01:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62193 00:20:05.594 23:01:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62193 00:20:05.594 [2024-12-09 23:01:40.816173] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:05.594 [2024-12-09 23:01:40.913731] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:06.540 23:01:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.yeibJI1rUb 00:20:06.540 23:01:41 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:20:06.540 23:01:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:20:06.540 ************************************ 00:20:06.540 END TEST raid_write_error_test 00:20:06.540 ************************************ 00:20:06.540 23:01:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:20:06.540 23:01:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:20:06.540 23:01:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:06.540 23:01:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:20:06.540 23:01:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:20:06.540 00:20:06.540 real 0m3.859s 00:20:06.540 user 0m4.520s 00:20:06.540 sys 0m0.555s 00:20:06.540 23:01:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:06.540 23:01:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.540 23:01:41 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:20:06.540 23:01:41 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:20:06.540 23:01:41 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:20:06.540 23:01:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:06.540 23:01:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:06.540 23:01:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:06.540 ************************************ 00:20:06.540 START TEST raid_state_function_test 00:20:06.540 ************************************ 00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:06.540 
23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:20:06.540 Process raid pid: 62326 00:20:06.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62326 00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62326' 00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62326 00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62326 ']' 00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:06.540 23:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.802 [2024-12-09 23:01:41.955223] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:20:06.802 [2024-12-09 23:01:41.955388] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:06.802 [2024-12-09 23:01:42.125019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.064 [2024-12-09 23:01:42.275347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.325 [2024-12-09 23:01:42.448671] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:07.325 [2024-12-09 23:01:42.448724] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:07.585 23:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:07.585 23:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:20:07.585 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:07.585 23:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.585 23:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.585 [2024-12-09 23:01:42.858886] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:07.585 [2024-12-09 
23:01:42.859162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:07.585 [2024-12-09 23:01:42.859252] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:07.585 [2024-12-09 23:01:42.859284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:07.585 [2024-12-09 23:01:42.859303] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:07.585 [2024-12-09 23:01:42.859323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:07.585 23:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.585 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:07.585 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:07.585 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:07.585 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:07.585 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:07.585 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:07.585 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:07.585 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:07.585 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:07.585 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:07.585 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:07.585 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:07.585 23:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.585 23:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.585 23:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.585 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:07.585 "name": "Existed_Raid", 00:20:07.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.585 "strip_size_kb": 64, 00:20:07.585 "state": "configuring", 00:20:07.585 "raid_level": "raid0", 00:20:07.585 "superblock": false, 00:20:07.585 "num_base_bdevs": 3, 00:20:07.585 "num_base_bdevs_discovered": 0, 00:20:07.585 "num_base_bdevs_operational": 3, 00:20:07.585 "base_bdevs_list": [ 00:20:07.585 { 00:20:07.585 "name": "BaseBdev1", 00:20:07.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.585 "is_configured": false, 00:20:07.585 "data_offset": 0, 00:20:07.585 "data_size": 0 00:20:07.585 }, 00:20:07.585 { 00:20:07.585 "name": "BaseBdev2", 00:20:07.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.585 "is_configured": false, 00:20:07.585 "data_offset": 0, 00:20:07.585 "data_size": 0 00:20:07.585 }, 00:20:07.585 { 00:20:07.585 "name": "BaseBdev3", 00:20:07.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.585 "is_configured": false, 00:20:07.585 "data_offset": 0, 00:20:07.585 "data_size": 0 00:20:07.585 } 00:20:07.585 ] 00:20:07.585 }' 00:20:07.585 23:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:07.585 23:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.854 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:20:07.854 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.854 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.854 [2024-12-09 23:01:43.186931] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:07.854 [2024-12-09 23:01:43.186982] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:07.854 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.854 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:07.854 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.854 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.854 [2024-12-09 23:01:43.194938] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:07.854 [2024-12-09 23:01:43.195005] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:07.854 [2024-12-09 23:01:43.195015] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:07.854 [2024-12-09 23:01:43.195026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:07.854 [2024-12-09 23:01:43.195033] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:07.854 [2024-12-09 23:01:43.195043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:07.854 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.854 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev1 00:20:07.854 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.854 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.116 BaseBdev1 00:20:08.117 [2024-12-09 23:01:43.234275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:08.117 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.117 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:08.117 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:08.117 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:08.117 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:08.117 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:08.117 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:08.117 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:08.117 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.117 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.117 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.117 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:08.117 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.117 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.117 [ 00:20:08.117 { 
00:20:08.117 "name": "BaseBdev1", 00:20:08.117 "aliases": [ 00:20:08.117 "a3405cfb-c7f8-47d6-a88e-132b5e6fdc27" 00:20:08.117 ], 00:20:08.117 "product_name": "Malloc disk", 00:20:08.117 "block_size": 512, 00:20:08.117 "num_blocks": 65536, 00:20:08.117 "uuid": "a3405cfb-c7f8-47d6-a88e-132b5e6fdc27", 00:20:08.117 "assigned_rate_limits": { 00:20:08.117 "rw_ios_per_sec": 0, 00:20:08.117 "rw_mbytes_per_sec": 0, 00:20:08.117 "r_mbytes_per_sec": 0, 00:20:08.117 "w_mbytes_per_sec": 0 00:20:08.117 }, 00:20:08.117 "claimed": true, 00:20:08.117 "claim_type": "exclusive_write", 00:20:08.117 "zoned": false, 00:20:08.117 "supported_io_types": { 00:20:08.117 "read": true, 00:20:08.117 "write": true, 00:20:08.117 "unmap": true, 00:20:08.117 "flush": true, 00:20:08.117 "reset": true, 00:20:08.117 "nvme_admin": false, 00:20:08.117 "nvme_io": false, 00:20:08.117 "nvme_io_md": false, 00:20:08.117 "write_zeroes": true, 00:20:08.117 "zcopy": true, 00:20:08.117 "get_zone_info": false, 00:20:08.117 "zone_management": false, 00:20:08.117 "zone_append": false, 00:20:08.117 "compare": false, 00:20:08.117 "compare_and_write": false, 00:20:08.117 "abort": true, 00:20:08.117 "seek_hole": false, 00:20:08.117 "seek_data": false, 00:20:08.117 "copy": true, 00:20:08.117 "nvme_iov_md": false 00:20:08.117 }, 00:20:08.117 "memory_domains": [ 00:20:08.117 { 00:20:08.117 "dma_device_id": "system", 00:20:08.117 "dma_device_type": 1 00:20:08.117 }, 00:20:08.117 { 00:20:08.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:08.117 "dma_device_type": 2 00:20:08.117 } 00:20:08.117 ], 00:20:08.117 "driver_specific": {} 00:20:08.117 } 00:20:08.117 ] 00:20:08.117 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.117 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:08.117 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 
00:20:08.117 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:08.117 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:08.117 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:08.117 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:08.117 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:08.117 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:08.117 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:08.117 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:08.117 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:08.117 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.117 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:08.117 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.117 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.117 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.117 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:08.117 "name": "Existed_Raid", 00:20:08.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.117 "strip_size_kb": 64, 00:20:08.117 "state": "configuring", 00:20:08.117 "raid_level": "raid0", 00:20:08.117 "superblock": false, 00:20:08.117 "num_base_bdevs": 3, 00:20:08.117 
"num_base_bdevs_discovered": 1, 00:20:08.117 "num_base_bdevs_operational": 3, 00:20:08.117 "base_bdevs_list": [ 00:20:08.117 { 00:20:08.117 "name": "BaseBdev1", 00:20:08.117 "uuid": "a3405cfb-c7f8-47d6-a88e-132b5e6fdc27", 00:20:08.117 "is_configured": true, 00:20:08.117 "data_offset": 0, 00:20:08.117 "data_size": 65536 00:20:08.117 }, 00:20:08.117 { 00:20:08.117 "name": "BaseBdev2", 00:20:08.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.117 "is_configured": false, 00:20:08.117 "data_offset": 0, 00:20:08.117 "data_size": 0 00:20:08.117 }, 00:20:08.117 { 00:20:08.117 "name": "BaseBdev3", 00:20:08.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.117 "is_configured": false, 00:20:08.117 "data_offset": 0, 00:20:08.117 "data_size": 0 00:20:08.117 } 00:20:08.117 ] 00:20:08.117 }' 00:20:08.117 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:08.117 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.379 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:08.379 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.379 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.379 [2024-12-09 23:01:43.582411] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:08.379 [2024-12-09 23:01:43.582482] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:08.379 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.379 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:08.379 23:01:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.379 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.379 [2024-12-09 23:01:43.590491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:08.379 [2024-12-09 23:01:43.592727] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:08.379 [2024-12-09 23:01:43.592792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:08.379 [2024-12-09 23:01:43.592805] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:08.379 [2024-12-09 23:01:43.592816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:08.379 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.379 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:08.379 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:08.379 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:08.379 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:08.379 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:08.379 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:08.379 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:08.379 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:08.379 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:08.379 23:01:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:08.379 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:08.380 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:08.380 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.380 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.380 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.380 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:08.380 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.380 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:08.380 "name": "Existed_Raid", 00:20:08.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.380 "strip_size_kb": 64, 00:20:08.380 "state": "configuring", 00:20:08.380 "raid_level": "raid0", 00:20:08.380 "superblock": false, 00:20:08.380 "num_base_bdevs": 3, 00:20:08.380 "num_base_bdevs_discovered": 1, 00:20:08.380 "num_base_bdevs_operational": 3, 00:20:08.380 "base_bdevs_list": [ 00:20:08.380 { 00:20:08.380 "name": "BaseBdev1", 00:20:08.380 "uuid": "a3405cfb-c7f8-47d6-a88e-132b5e6fdc27", 00:20:08.380 "is_configured": true, 00:20:08.380 "data_offset": 0, 00:20:08.380 "data_size": 65536 00:20:08.380 }, 00:20:08.380 { 00:20:08.380 "name": "BaseBdev2", 00:20:08.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.380 "is_configured": false, 00:20:08.380 "data_offset": 0, 00:20:08.380 "data_size": 0 00:20:08.380 }, 00:20:08.380 { 00:20:08.380 "name": "BaseBdev3", 00:20:08.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.380 "is_configured": false, 00:20:08.380 "data_offset": 
0, 00:20:08.380 "data_size": 0 00:20:08.380 } 00:20:08.380 ] 00:20:08.380 }' 00:20:08.380 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:08.380 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.640 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:08.640 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.640 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.640 [2024-12-09 23:01:43.951137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:08.640 BaseBdev2 00:20:08.640 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.640 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:08.640 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:08.640 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:08.640 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:08.640 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:08.640 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:08.640 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:08.640 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.640 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.640 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:08.640 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:08.640 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.640 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.640 [ 00:20:08.640 { 00:20:08.640 "name": "BaseBdev2", 00:20:08.640 "aliases": [ 00:20:08.640 "5312c45a-ae99-4cbd-8e69-ee9fd4f0f84a" 00:20:08.640 ], 00:20:08.640 "product_name": "Malloc disk", 00:20:08.640 "block_size": 512, 00:20:08.640 "num_blocks": 65536, 00:20:08.640 "uuid": "5312c45a-ae99-4cbd-8e69-ee9fd4f0f84a", 00:20:08.640 "assigned_rate_limits": { 00:20:08.640 "rw_ios_per_sec": 0, 00:20:08.640 "rw_mbytes_per_sec": 0, 00:20:08.640 "r_mbytes_per_sec": 0, 00:20:08.640 "w_mbytes_per_sec": 0 00:20:08.640 }, 00:20:08.640 "claimed": true, 00:20:08.640 "claim_type": "exclusive_write", 00:20:08.640 "zoned": false, 00:20:08.640 "supported_io_types": { 00:20:08.640 "read": true, 00:20:08.640 "write": true, 00:20:08.640 "unmap": true, 00:20:08.640 "flush": true, 00:20:08.640 "reset": true, 00:20:08.640 "nvme_admin": false, 00:20:08.640 "nvme_io": false, 00:20:08.640 "nvme_io_md": false, 00:20:08.640 "write_zeroes": true, 00:20:08.640 "zcopy": true, 00:20:08.640 "get_zone_info": false, 00:20:08.640 "zone_management": false, 00:20:08.640 "zone_append": false, 00:20:08.640 "compare": false, 00:20:08.640 "compare_and_write": false, 00:20:08.640 "abort": true, 00:20:08.640 "seek_hole": false, 00:20:08.640 "seek_data": false, 00:20:08.640 "copy": true, 00:20:08.640 "nvme_iov_md": false 00:20:08.640 }, 00:20:08.640 "memory_domains": [ 00:20:08.640 { 00:20:08.640 "dma_device_id": "system", 00:20:08.640 "dma_device_type": 1 00:20:08.640 }, 00:20:08.640 { 00:20:08.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:08.640 "dma_device_type": 2 00:20:08.640 } 00:20:08.640 ], 00:20:08.640 "driver_specific": {} 00:20:08.640 } 
00:20:08.640 ] 00:20:08.640 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.640 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:08.640 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:08.640 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:08.640 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:08.640 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:08.640 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:08.640 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:08.640 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:08.640 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:08.640 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:08.640 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:08.640 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:08.640 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:08.640 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.640 23:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:08.640 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.640 23:01:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.640 23:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.900 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:08.900 "name": "Existed_Raid", 00:20:08.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.900 "strip_size_kb": 64, 00:20:08.900 "state": "configuring", 00:20:08.900 "raid_level": "raid0", 00:20:08.900 "superblock": false, 00:20:08.900 "num_base_bdevs": 3, 00:20:08.900 "num_base_bdevs_discovered": 2, 00:20:08.900 "num_base_bdevs_operational": 3, 00:20:08.900 "base_bdevs_list": [ 00:20:08.900 { 00:20:08.900 "name": "BaseBdev1", 00:20:08.900 "uuid": "a3405cfb-c7f8-47d6-a88e-132b5e6fdc27", 00:20:08.900 "is_configured": true, 00:20:08.900 "data_offset": 0, 00:20:08.900 "data_size": 65536 00:20:08.900 }, 00:20:08.900 { 00:20:08.900 "name": "BaseBdev2", 00:20:08.900 "uuid": "5312c45a-ae99-4cbd-8e69-ee9fd4f0f84a", 00:20:08.900 "is_configured": true, 00:20:08.900 "data_offset": 0, 00:20:08.900 "data_size": 65536 00:20:08.900 }, 00:20:08.900 { 00:20:08.900 "name": "BaseBdev3", 00:20:08.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.900 "is_configured": false, 00:20:08.900 "data_offset": 0, 00:20:08.900 "data_size": 0 00:20:08.900 } 00:20:08.900 ] 00:20:08.900 }' 00:20:08.900 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:08.900 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.162 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:09.162 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.162 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.162 [2024-12-09 23:01:44.360148] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:09.162 [2024-12-09 23:01:44.360211] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:09.162 [2024-12-09 23:01:44.360229] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:09.162 [2024-12-09 23:01:44.360555] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:09.162 [2024-12-09 23:01:44.360741] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:09.162 [2024-12-09 23:01:44.360750] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:09.162 [2024-12-09 23:01:44.361082] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:09.162 BaseBdev3 00:20:09.162 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.162 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:20:09.162 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:09.162 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:09.162 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:09.162 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:09.162 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:09.162 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:09.162 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.162 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:20:09.162 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.162 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:09.162 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.162 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.162 [ 00:20:09.162 { 00:20:09.162 "name": "BaseBdev3", 00:20:09.162 "aliases": [ 00:20:09.162 "ea1a95ba-0ac5-4cc2-b4c3-f736b562c44f" 00:20:09.162 ], 00:20:09.162 "product_name": "Malloc disk", 00:20:09.162 "block_size": 512, 00:20:09.162 "num_blocks": 65536, 00:20:09.162 "uuid": "ea1a95ba-0ac5-4cc2-b4c3-f736b562c44f", 00:20:09.162 "assigned_rate_limits": { 00:20:09.162 "rw_ios_per_sec": 0, 00:20:09.162 "rw_mbytes_per_sec": 0, 00:20:09.162 "r_mbytes_per_sec": 0, 00:20:09.162 "w_mbytes_per_sec": 0 00:20:09.162 }, 00:20:09.162 "claimed": true, 00:20:09.162 "claim_type": "exclusive_write", 00:20:09.162 "zoned": false, 00:20:09.162 "supported_io_types": { 00:20:09.162 "read": true, 00:20:09.162 "write": true, 00:20:09.162 "unmap": true, 00:20:09.162 "flush": true, 00:20:09.162 "reset": true, 00:20:09.162 "nvme_admin": false, 00:20:09.162 "nvme_io": false, 00:20:09.162 "nvme_io_md": false, 00:20:09.162 "write_zeroes": true, 00:20:09.162 "zcopy": true, 00:20:09.162 "get_zone_info": false, 00:20:09.162 "zone_management": false, 00:20:09.162 "zone_append": false, 00:20:09.162 "compare": false, 00:20:09.162 "compare_and_write": false, 00:20:09.162 "abort": true, 00:20:09.162 "seek_hole": false, 00:20:09.162 "seek_data": false, 00:20:09.162 "copy": true, 00:20:09.162 "nvme_iov_md": false 00:20:09.162 }, 00:20:09.162 "memory_domains": [ 00:20:09.162 { 00:20:09.162 "dma_device_id": "system", 00:20:09.162 "dma_device_type": 1 00:20:09.162 }, 00:20:09.162 { 00:20:09.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:20:09.162 "dma_device_type": 2 00:20:09.162 } 00:20:09.162 ], 00:20:09.162 "driver_specific": {} 00:20:09.162 } 00:20:09.162 ] 00:20:09.162 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.162 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:09.162 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:09.162 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:09.162 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:20:09.162 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:09.162 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:09.162 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:09.162 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:09.162 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:09.162 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:09.162 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:09.162 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:09.163 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:09.163 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.163 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.163 23:01:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:09.163 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:09.163 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.163 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:09.163 "name": "Existed_Raid", 00:20:09.163 "uuid": "844bfc7d-ecfd-4f16-8d26-8830ac5ea20e", 00:20:09.163 "strip_size_kb": 64, 00:20:09.163 "state": "online", 00:20:09.163 "raid_level": "raid0", 00:20:09.163 "superblock": false, 00:20:09.163 "num_base_bdevs": 3, 00:20:09.163 "num_base_bdevs_discovered": 3, 00:20:09.163 "num_base_bdevs_operational": 3, 00:20:09.163 "base_bdevs_list": [ 00:20:09.163 { 00:20:09.163 "name": "BaseBdev1", 00:20:09.163 "uuid": "a3405cfb-c7f8-47d6-a88e-132b5e6fdc27", 00:20:09.163 "is_configured": true, 00:20:09.163 "data_offset": 0, 00:20:09.163 "data_size": 65536 00:20:09.163 }, 00:20:09.163 { 00:20:09.163 "name": "BaseBdev2", 00:20:09.163 "uuid": "5312c45a-ae99-4cbd-8e69-ee9fd4f0f84a", 00:20:09.163 "is_configured": true, 00:20:09.163 "data_offset": 0, 00:20:09.163 "data_size": 65536 00:20:09.163 }, 00:20:09.163 { 00:20:09.163 "name": "BaseBdev3", 00:20:09.163 "uuid": "ea1a95ba-0ac5-4cc2-b4c3-f736b562c44f", 00:20:09.163 "is_configured": true, 00:20:09.163 "data_offset": 0, 00:20:09.163 "data_size": 65536 00:20:09.163 } 00:20:09.163 ] 00:20:09.163 }' 00:20:09.163 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:09.163 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.424 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:09.424 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:09.424 23:01:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:09.424 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:09.424 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:09.424 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:09.424 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:09.424 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:09.424 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.424 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.424 [2024-12-09 23:01:44.744739] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:09.424 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.424 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:09.424 "name": "Existed_Raid", 00:20:09.424 "aliases": [ 00:20:09.424 "844bfc7d-ecfd-4f16-8d26-8830ac5ea20e" 00:20:09.424 ], 00:20:09.424 "product_name": "Raid Volume", 00:20:09.424 "block_size": 512, 00:20:09.424 "num_blocks": 196608, 00:20:09.424 "uuid": "844bfc7d-ecfd-4f16-8d26-8830ac5ea20e", 00:20:09.424 "assigned_rate_limits": { 00:20:09.424 "rw_ios_per_sec": 0, 00:20:09.424 "rw_mbytes_per_sec": 0, 00:20:09.424 "r_mbytes_per_sec": 0, 00:20:09.424 "w_mbytes_per_sec": 0 00:20:09.424 }, 00:20:09.424 "claimed": false, 00:20:09.424 "zoned": false, 00:20:09.424 "supported_io_types": { 00:20:09.424 "read": true, 00:20:09.424 "write": true, 00:20:09.424 "unmap": true, 00:20:09.424 "flush": true, 00:20:09.424 "reset": true, 00:20:09.424 "nvme_admin": false, 00:20:09.424 "nvme_io": false, 00:20:09.424 "nvme_io_md": false, 00:20:09.424 
"write_zeroes": true, 00:20:09.424 "zcopy": false, 00:20:09.424 "get_zone_info": false, 00:20:09.424 "zone_management": false, 00:20:09.424 "zone_append": false, 00:20:09.424 "compare": false, 00:20:09.424 "compare_and_write": false, 00:20:09.424 "abort": false, 00:20:09.424 "seek_hole": false, 00:20:09.424 "seek_data": false, 00:20:09.424 "copy": false, 00:20:09.424 "nvme_iov_md": false 00:20:09.424 }, 00:20:09.424 "memory_domains": [ 00:20:09.424 { 00:20:09.424 "dma_device_id": "system", 00:20:09.424 "dma_device_type": 1 00:20:09.424 }, 00:20:09.424 { 00:20:09.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.425 "dma_device_type": 2 00:20:09.425 }, 00:20:09.425 { 00:20:09.425 "dma_device_id": "system", 00:20:09.425 "dma_device_type": 1 00:20:09.425 }, 00:20:09.425 { 00:20:09.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.425 "dma_device_type": 2 00:20:09.425 }, 00:20:09.425 { 00:20:09.425 "dma_device_id": "system", 00:20:09.425 "dma_device_type": 1 00:20:09.425 }, 00:20:09.425 { 00:20:09.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.425 "dma_device_type": 2 00:20:09.425 } 00:20:09.425 ], 00:20:09.425 "driver_specific": { 00:20:09.425 "raid": { 00:20:09.425 "uuid": "844bfc7d-ecfd-4f16-8d26-8830ac5ea20e", 00:20:09.425 "strip_size_kb": 64, 00:20:09.425 "state": "online", 00:20:09.425 "raid_level": "raid0", 00:20:09.425 "superblock": false, 00:20:09.425 "num_base_bdevs": 3, 00:20:09.425 "num_base_bdevs_discovered": 3, 00:20:09.425 "num_base_bdevs_operational": 3, 00:20:09.425 "base_bdevs_list": [ 00:20:09.425 { 00:20:09.425 "name": "BaseBdev1", 00:20:09.425 "uuid": "a3405cfb-c7f8-47d6-a88e-132b5e6fdc27", 00:20:09.425 "is_configured": true, 00:20:09.425 "data_offset": 0, 00:20:09.425 "data_size": 65536 00:20:09.425 }, 00:20:09.425 { 00:20:09.425 "name": "BaseBdev2", 00:20:09.425 "uuid": "5312c45a-ae99-4cbd-8e69-ee9fd4f0f84a", 00:20:09.425 "is_configured": true, 00:20:09.425 "data_offset": 0, 00:20:09.425 "data_size": 65536 00:20:09.425 }, 
00:20:09.425 { 00:20:09.425 "name": "BaseBdev3", 00:20:09.425 "uuid": "ea1a95ba-0ac5-4cc2-b4c3-f736b562c44f", 00:20:09.425 "is_configured": true, 00:20:09.425 "data_offset": 0, 00:20:09.425 "data_size": 65536 00:20:09.425 } 00:20:09.425 ] 00:20:09.425 } 00:20:09.425 } 00:20:09.425 }' 00:20:09.425 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:09.685 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:09.685 BaseBdev2 00:20:09.685 BaseBdev3' 00:20:09.685 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:09.685 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:09.685 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:09.685 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:09.685 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:09.685 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.686 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.686 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.686 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:09.686 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:09.686 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:09.686 23:01:44 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:09.686 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:09.686 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.686 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.686 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.686 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:09.686 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:09.686 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:09.686 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:09.686 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.686 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.686 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:09.686 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.686 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:09.686 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:09.686 23:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:09.686 23:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.686 23:01:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.686 [2024-12-09 23:01:44.960491] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:09.686 [2024-12-09 23:01:44.960672] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:09.686 [2024-12-09 23:01:44.960816] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:09.953 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.953 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:09.953 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:20:09.953 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:09.953 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:20:09.953 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:20:09.953 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:20:09.953 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:09.953 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:20:09.953 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:09.953 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:09.953 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:09.953 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:09.953 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:20:09.953 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:09.953 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:09.953 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:09.953 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.953 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.953 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.953 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.953 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:09.953 "name": "Existed_Raid", 00:20:09.953 "uuid": "844bfc7d-ecfd-4f16-8d26-8830ac5ea20e", 00:20:09.953 "strip_size_kb": 64, 00:20:09.953 "state": "offline", 00:20:09.953 "raid_level": "raid0", 00:20:09.953 "superblock": false, 00:20:09.953 "num_base_bdevs": 3, 00:20:09.953 "num_base_bdevs_discovered": 2, 00:20:09.953 "num_base_bdevs_operational": 2, 00:20:09.953 "base_bdevs_list": [ 00:20:09.953 { 00:20:09.953 "name": null, 00:20:09.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.953 "is_configured": false, 00:20:09.953 "data_offset": 0, 00:20:09.953 "data_size": 65536 00:20:09.953 }, 00:20:09.953 { 00:20:09.953 "name": "BaseBdev2", 00:20:09.953 "uuid": "5312c45a-ae99-4cbd-8e69-ee9fd4f0f84a", 00:20:09.953 "is_configured": true, 00:20:09.953 "data_offset": 0, 00:20:09.953 "data_size": 65536 00:20:09.953 }, 00:20:09.953 { 00:20:09.953 "name": "BaseBdev3", 00:20:09.953 "uuid": "ea1a95ba-0ac5-4cc2-b4c3-f736b562c44f", 00:20:09.953 "is_configured": true, 00:20:09.953 "data_offset": 0, 00:20:09.953 "data_size": 65536 00:20:09.953 } 00:20:09.953 ] 00:20:09.953 }' 00:20:09.953 
23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:09.953 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.213 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:10.213 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:10.213 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.213 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.213 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.213 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:10.213 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.213 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:10.213 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:10.213 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:10.213 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.213 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.213 [2024-12-09 23:01:45.422544] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:10.213 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.213 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:10.213 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:10.213 23:01:45 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.213 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:10.213 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.213 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.213 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.213 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:10.213 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:10.213 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:20:10.213 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.213 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.213 [2024-12-09 23:01:45.529308] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:10.213 [2024-12-09 23:01:45.529518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:10.474 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.474 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:10.474 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:10.474 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.474 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.474 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.474 23:01:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:10.474 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.474 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:10.474 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:10.474 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:20:10.474 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:10.474 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:10.474 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:10.474 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.474 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.474 BaseBdev2 00:20:10.474 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.474 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:20:10.474 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:10.474 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:10.474 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:10.474 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:10.474 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:10.474 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:20:10.474 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.474 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.474 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.474 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:10.474 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.474 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.474 [ 00:20:10.474 { 00:20:10.474 "name": "BaseBdev2", 00:20:10.474 "aliases": [ 00:20:10.474 "558c62ab-a65c-4a09-b36f-0cd9333a152b" 00:20:10.474 ], 00:20:10.474 "product_name": "Malloc disk", 00:20:10.474 "block_size": 512, 00:20:10.475 "num_blocks": 65536, 00:20:10.475 "uuid": "558c62ab-a65c-4a09-b36f-0cd9333a152b", 00:20:10.475 "assigned_rate_limits": { 00:20:10.475 "rw_ios_per_sec": 0, 00:20:10.475 "rw_mbytes_per_sec": 0, 00:20:10.475 "r_mbytes_per_sec": 0, 00:20:10.475 "w_mbytes_per_sec": 0 00:20:10.475 }, 00:20:10.475 "claimed": false, 00:20:10.475 "zoned": false, 00:20:10.475 "supported_io_types": { 00:20:10.475 "read": true, 00:20:10.475 "write": true, 00:20:10.475 "unmap": true, 00:20:10.475 "flush": true, 00:20:10.475 "reset": true, 00:20:10.475 "nvme_admin": false, 00:20:10.475 "nvme_io": false, 00:20:10.475 "nvme_io_md": false, 00:20:10.475 "write_zeroes": true, 00:20:10.475 "zcopy": true, 00:20:10.475 "get_zone_info": false, 00:20:10.475 "zone_management": false, 00:20:10.475 "zone_append": false, 00:20:10.475 "compare": false, 00:20:10.475 "compare_and_write": false, 00:20:10.475 "abort": true, 00:20:10.475 "seek_hole": false, 00:20:10.475 "seek_data": false, 00:20:10.475 "copy": true, 00:20:10.475 "nvme_iov_md": false 00:20:10.475 }, 00:20:10.475 "memory_domains": [ 00:20:10.475 { 
00:20:10.475 "dma_device_id": "system", 00:20:10.475 "dma_device_type": 1 00:20:10.475 }, 00:20:10.475 { 00:20:10.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.475 "dma_device_type": 2 00:20:10.475 } 00:20:10.475 ], 00:20:10.475 "driver_specific": {} 00:20:10.475 } 00:20:10.475 ] 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.475 BaseBdev3 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:10.475 
23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.475 [ 00:20:10.475 { 00:20:10.475 "name": "BaseBdev3", 00:20:10.475 "aliases": [ 00:20:10.475 "c7cb6341-2161-432d-8513-f1363464139c" 00:20:10.475 ], 00:20:10.475 "product_name": "Malloc disk", 00:20:10.475 "block_size": 512, 00:20:10.475 "num_blocks": 65536, 00:20:10.475 "uuid": "c7cb6341-2161-432d-8513-f1363464139c", 00:20:10.475 "assigned_rate_limits": { 00:20:10.475 "rw_ios_per_sec": 0, 00:20:10.475 "rw_mbytes_per_sec": 0, 00:20:10.475 "r_mbytes_per_sec": 0, 00:20:10.475 "w_mbytes_per_sec": 0 00:20:10.475 }, 00:20:10.475 "claimed": false, 00:20:10.475 "zoned": false, 00:20:10.475 "supported_io_types": { 00:20:10.475 "read": true, 00:20:10.475 "write": true, 00:20:10.475 "unmap": true, 00:20:10.475 "flush": true, 00:20:10.475 "reset": true, 00:20:10.475 "nvme_admin": false, 00:20:10.475 "nvme_io": false, 00:20:10.475 "nvme_io_md": false, 00:20:10.475 "write_zeroes": true, 00:20:10.475 "zcopy": true, 00:20:10.475 "get_zone_info": false, 00:20:10.475 "zone_management": false, 00:20:10.475 "zone_append": false, 00:20:10.475 "compare": false, 00:20:10.475 "compare_and_write": false, 00:20:10.475 "abort": true, 00:20:10.475 "seek_hole": false, 00:20:10.475 "seek_data": false, 00:20:10.475 "copy": true, 00:20:10.475 "nvme_iov_md": false 00:20:10.475 }, 00:20:10.475 "memory_domains": [ 00:20:10.475 { 00:20:10.475 
"dma_device_id": "system", 00:20:10.475 "dma_device_type": 1 00:20:10.475 }, 00:20:10.475 { 00:20:10.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.475 "dma_device_type": 2 00:20:10.475 } 00:20:10.475 ], 00:20:10.475 "driver_specific": {} 00:20:10.475 } 00:20:10.475 ] 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.475 [2024-12-09 23:01:45.751591] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:10.475 [2024-12-09 23:01:45.751790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:10.475 [2024-12-09 23:01:45.751828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:10.475 [2024-12-09 23:01:45.753997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:10.475 
23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:10.475 "name": "Existed_Raid", 00:20:10.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.475 "strip_size_kb": 64, 00:20:10.475 "state": "configuring", 00:20:10.475 "raid_level": "raid0", 00:20:10.475 "superblock": false, 00:20:10.475 "num_base_bdevs": 3, 00:20:10.475 "num_base_bdevs_discovered": 2, 00:20:10.475 "num_base_bdevs_operational": 3, 00:20:10.475 "base_bdevs_list": [ 00:20:10.475 { 00:20:10.475 "name": 
"BaseBdev1", 00:20:10.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.475 "is_configured": false, 00:20:10.475 "data_offset": 0, 00:20:10.475 "data_size": 0 00:20:10.475 }, 00:20:10.475 { 00:20:10.475 "name": "BaseBdev2", 00:20:10.475 "uuid": "558c62ab-a65c-4a09-b36f-0cd9333a152b", 00:20:10.475 "is_configured": true, 00:20:10.475 "data_offset": 0, 00:20:10.475 "data_size": 65536 00:20:10.475 }, 00:20:10.475 { 00:20:10.475 "name": "BaseBdev3", 00:20:10.475 "uuid": "c7cb6341-2161-432d-8513-f1363464139c", 00:20:10.475 "is_configured": true, 00:20:10.475 "data_offset": 0, 00:20:10.475 "data_size": 65536 00:20:10.475 } 00:20:10.475 ] 00:20:10.475 }' 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:10.475 23:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.735 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:10.735 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.736 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.736 [2024-12-09 23:01:46.095690] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:10.997 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.997 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:10.997 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:10.997 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:10.997 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:10.997 23:01:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:10.997 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:10.997 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.997 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:10.997 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.997 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:10.997 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.997 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:10.997 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.997 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.997 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.997 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:10.997 "name": "Existed_Raid", 00:20:10.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.997 "strip_size_kb": 64, 00:20:10.997 "state": "configuring", 00:20:10.997 "raid_level": "raid0", 00:20:10.997 "superblock": false, 00:20:10.997 "num_base_bdevs": 3, 00:20:10.997 "num_base_bdevs_discovered": 1, 00:20:10.997 "num_base_bdevs_operational": 3, 00:20:10.997 "base_bdevs_list": [ 00:20:10.997 { 00:20:10.997 "name": "BaseBdev1", 00:20:10.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.997 "is_configured": false, 00:20:10.997 "data_offset": 0, 00:20:10.997 "data_size": 0 00:20:10.997 }, 00:20:10.997 { 00:20:10.997 "name": null, 00:20:10.997 "uuid": "558c62ab-a65c-4a09-b36f-0cd9333a152b", 
00:20:10.997 "is_configured": false, 00:20:10.997 "data_offset": 0, 00:20:10.997 "data_size": 65536 00:20:10.997 }, 00:20:10.997 { 00:20:10.997 "name": "BaseBdev3", 00:20:10.997 "uuid": "c7cb6341-2161-432d-8513-f1363464139c", 00:20:10.997 "is_configured": true, 00:20:10.997 "data_offset": 0, 00:20:10.997 "data_size": 65536 00:20:10.997 } 00:20:10.997 ] 00:20:10.997 }' 00:20:10.997 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:10.997 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.258 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:11.258 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.258 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.258 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.258 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.259 [2024-12-09 23:01:46.511259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:11.259 BaseBdev1 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:20:11.259 23:01:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.259 [ 00:20:11.259 { 00:20:11.259 "name": "BaseBdev1", 00:20:11.259 "aliases": [ 00:20:11.259 "a6e5e32f-366c-4a2e-81d6-78daa8feebde" 00:20:11.259 ], 00:20:11.259 "product_name": "Malloc disk", 00:20:11.259 "block_size": 512, 00:20:11.259 "num_blocks": 65536, 00:20:11.259 "uuid": "a6e5e32f-366c-4a2e-81d6-78daa8feebde", 00:20:11.259 "assigned_rate_limits": { 00:20:11.259 "rw_ios_per_sec": 0, 00:20:11.259 "rw_mbytes_per_sec": 0, 00:20:11.259 "r_mbytes_per_sec": 0, 00:20:11.259 "w_mbytes_per_sec": 0 00:20:11.259 }, 00:20:11.259 "claimed": true, 00:20:11.259 "claim_type": "exclusive_write", 00:20:11.259 "zoned": false, 00:20:11.259 "supported_io_types": { 
00:20:11.259 "read": true, 00:20:11.259 "write": true, 00:20:11.259 "unmap": true, 00:20:11.259 "flush": true, 00:20:11.259 "reset": true, 00:20:11.259 "nvme_admin": false, 00:20:11.259 "nvme_io": false, 00:20:11.259 "nvme_io_md": false, 00:20:11.259 "write_zeroes": true, 00:20:11.259 "zcopy": true, 00:20:11.259 "get_zone_info": false, 00:20:11.259 "zone_management": false, 00:20:11.259 "zone_append": false, 00:20:11.259 "compare": false, 00:20:11.259 "compare_and_write": false, 00:20:11.259 "abort": true, 00:20:11.259 "seek_hole": false, 00:20:11.259 "seek_data": false, 00:20:11.259 "copy": true, 00:20:11.259 "nvme_iov_md": false 00:20:11.259 }, 00:20:11.259 "memory_domains": [ 00:20:11.259 { 00:20:11.259 "dma_device_id": "system", 00:20:11.259 "dma_device_type": 1 00:20:11.259 }, 00:20:11.259 { 00:20:11.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.259 "dma_device_type": 2 00:20:11.259 } 00:20:11.259 ], 00:20:11.259 "driver_specific": {} 00:20:11.259 } 00:20:11.259 ] 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.259 "name": "Existed_Raid", 00:20:11.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.259 "strip_size_kb": 64, 00:20:11.259 "state": "configuring", 00:20:11.259 "raid_level": "raid0", 00:20:11.259 "superblock": false, 00:20:11.259 "num_base_bdevs": 3, 00:20:11.259 "num_base_bdevs_discovered": 2, 00:20:11.259 "num_base_bdevs_operational": 3, 00:20:11.259 "base_bdevs_list": [ 00:20:11.259 { 00:20:11.259 "name": "BaseBdev1", 00:20:11.259 "uuid": "a6e5e32f-366c-4a2e-81d6-78daa8feebde", 00:20:11.259 "is_configured": true, 00:20:11.259 "data_offset": 0, 00:20:11.259 "data_size": 65536 00:20:11.259 }, 00:20:11.259 { 00:20:11.259 "name": null, 00:20:11.259 "uuid": "558c62ab-a65c-4a09-b36f-0cd9333a152b", 00:20:11.259 "is_configured": false, 00:20:11.259 "data_offset": 0, 00:20:11.259 "data_size": 65536 00:20:11.259 }, 00:20:11.259 { 00:20:11.259 "name": "BaseBdev3", 00:20:11.259 "uuid": "c7cb6341-2161-432d-8513-f1363464139c", 
00:20:11.259 "is_configured": true, 00:20:11.259 "data_offset": 0, 00:20:11.259 "data_size": 65536 00:20:11.259 } 00:20:11.259 ] 00:20:11.259 }' 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.259 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.521 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.521 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:11.521 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.521 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.521 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.521 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:20:11.521 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:20:11.521 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.521 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.521 [2024-12-09 23:01:46.867411] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:11.521 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.521 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:11.521 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:11.521 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:11.521 23:01:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:11.521 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:11.521 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:11.521 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.521 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.521 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.521 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.521 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.521 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:11.521 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.521 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.786 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.786 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.786 "name": "Existed_Raid", 00:20:11.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.786 "strip_size_kb": 64, 00:20:11.786 "state": "configuring", 00:20:11.786 "raid_level": "raid0", 00:20:11.786 "superblock": false, 00:20:11.786 "num_base_bdevs": 3, 00:20:11.786 "num_base_bdevs_discovered": 1, 00:20:11.786 "num_base_bdevs_operational": 3, 00:20:11.786 "base_bdevs_list": [ 00:20:11.786 { 00:20:11.786 "name": "BaseBdev1", 00:20:11.786 "uuid": "a6e5e32f-366c-4a2e-81d6-78daa8feebde", 00:20:11.786 "is_configured": true, 00:20:11.786 "data_offset": 0, 
00:20:11.786 "data_size": 65536 00:20:11.786 }, 00:20:11.786 { 00:20:11.786 "name": null, 00:20:11.786 "uuid": "558c62ab-a65c-4a09-b36f-0cd9333a152b", 00:20:11.786 "is_configured": false, 00:20:11.786 "data_offset": 0, 00:20:11.786 "data_size": 65536 00:20:11.786 }, 00:20:11.786 { 00:20:11.786 "name": null, 00:20:11.786 "uuid": "c7cb6341-2161-432d-8513-f1363464139c", 00:20:11.786 "is_configured": false, 00:20:11.786 "data_offset": 0, 00:20:11.786 "data_size": 65536 00:20:11.786 } 00:20:11.786 ] 00:20:11.786 }' 00:20:11.786 23:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.786 23:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.072 23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:12.072 23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.072 23:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.072 23:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.072 23:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.072 23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:20:12.072 23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:12.072 23:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.072 23:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.072 [2024-12-09 23:01:47.239543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:12.072 23:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.072 
23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:12.072 23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:12.072 23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:12.072 23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:12.072 23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:12.072 23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:12.072 23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:12.072 23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:12.072 23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:12.072 23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:12.072 23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.072 23:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.072 23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:12.072 23:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.072 23:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.072 23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:12.072 "name": "Existed_Raid", 00:20:12.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.072 "strip_size_kb": 64, 00:20:12.072 "state": "configuring", 
00:20:12.072 "raid_level": "raid0", 00:20:12.072 "superblock": false, 00:20:12.072 "num_base_bdevs": 3, 00:20:12.072 "num_base_bdevs_discovered": 2, 00:20:12.072 "num_base_bdevs_operational": 3, 00:20:12.072 "base_bdevs_list": [ 00:20:12.072 { 00:20:12.072 "name": "BaseBdev1", 00:20:12.072 "uuid": "a6e5e32f-366c-4a2e-81d6-78daa8feebde", 00:20:12.072 "is_configured": true, 00:20:12.072 "data_offset": 0, 00:20:12.072 "data_size": 65536 00:20:12.072 }, 00:20:12.072 { 00:20:12.072 "name": null, 00:20:12.072 "uuid": "558c62ab-a65c-4a09-b36f-0cd9333a152b", 00:20:12.072 "is_configured": false, 00:20:12.072 "data_offset": 0, 00:20:12.072 "data_size": 65536 00:20:12.072 }, 00:20:12.072 { 00:20:12.072 "name": "BaseBdev3", 00:20:12.072 "uuid": "c7cb6341-2161-432d-8513-f1363464139c", 00:20:12.072 "is_configured": true, 00:20:12.072 "data_offset": 0, 00:20:12.072 "data_size": 65536 00:20:12.072 } 00:20:12.072 ] 00:20:12.072 }' 00:20:12.073 23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:12.073 23:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.333 23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:12.333 23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.333 23:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.333 23:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.333 23:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.333 23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:20:12.333 23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:12.333 23:01:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.333 23:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.333 [2024-12-09 23:01:47.611680] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:12.333 23:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.333 23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:12.333 23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:12.333 23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:12.333 23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:12.333 23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:12.333 23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:12.333 23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:12.333 23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:12.333 23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:12.333 23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:12.333 23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.333 23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:12.333 23:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.333 23:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.595 
23:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.595 23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:12.595 "name": "Existed_Raid", 00:20:12.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.595 "strip_size_kb": 64, 00:20:12.595 "state": "configuring", 00:20:12.595 "raid_level": "raid0", 00:20:12.595 "superblock": false, 00:20:12.595 "num_base_bdevs": 3, 00:20:12.595 "num_base_bdevs_discovered": 1, 00:20:12.595 "num_base_bdevs_operational": 3, 00:20:12.595 "base_bdevs_list": [ 00:20:12.595 { 00:20:12.595 "name": null, 00:20:12.595 "uuid": "a6e5e32f-366c-4a2e-81d6-78daa8feebde", 00:20:12.595 "is_configured": false, 00:20:12.595 "data_offset": 0, 00:20:12.595 "data_size": 65536 00:20:12.595 }, 00:20:12.595 { 00:20:12.596 "name": null, 00:20:12.596 "uuid": "558c62ab-a65c-4a09-b36f-0cd9333a152b", 00:20:12.596 "is_configured": false, 00:20:12.596 "data_offset": 0, 00:20:12.596 "data_size": 65536 00:20:12.596 }, 00:20:12.596 { 00:20:12.596 "name": "BaseBdev3", 00:20:12.596 "uuid": "c7cb6341-2161-432d-8513-f1363464139c", 00:20:12.596 "is_configured": true, 00:20:12.596 "data_offset": 0, 00:20:12.596 "data_size": 65536 00:20:12.596 } 00:20:12.596 ] 00:20:12.596 }' 00:20:12.596 23:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:12.596 23:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.857 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.857 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:12.857 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.857 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.857 23:01:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.857 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:20:12.857 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:12.857 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.857 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.857 [2024-12-09 23:01:48.046522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:12.857 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.857 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:12.857 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:12.857 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:12.857 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:12.857 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:12.857 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:12.857 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:12.857 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:12.857 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:12.857 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:12.857 23:01:48 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.857 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.857 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:12.857 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.857 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.857 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:12.857 "name": "Existed_Raid", 00:20:12.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.857 "strip_size_kb": 64, 00:20:12.857 "state": "configuring", 00:20:12.857 "raid_level": "raid0", 00:20:12.857 "superblock": false, 00:20:12.857 "num_base_bdevs": 3, 00:20:12.857 "num_base_bdevs_discovered": 2, 00:20:12.857 "num_base_bdevs_operational": 3, 00:20:12.857 "base_bdevs_list": [ 00:20:12.857 { 00:20:12.857 "name": null, 00:20:12.857 "uuid": "a6e5e32f-366c-4a2e-81d6-78daa8feebde", 00:20:12.857 "is_configured": false, 00:20:12.857 "data_offset": 0, 00:20:12.857 "data_size": 65536 00:20:12.857 }, 00:20:12.857 { 00:20:12.857 "name": "BaseBdev2", 00:20:12.857 "uuid": "558c62ab-a65c-4a09-b36f-0cd9333a152b", 00:20:12.857 "is_configured": true, 00:20:12.857 "data_offset": 0, 00:20:12.857 "data_size": 65536 00:20:12.857 }, 00:20:12.857 { 00:20:12.857 "name": "BaseBdev3", 00:20:12.857 "uuid": "c7cb6341-2161-432d-8513-f1363464139c", 00:20:12.857 "is_configured": true, 00:20:12.857 "data_offset": 0, 00:20:12.857 "data_size": 65536 00:20:12.857 } 00:20:12.857 ] 00:20:12.857 }' 00:20:12.857 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:12.857 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.118 23:01:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.119 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.119 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.119 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:13.119 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.119 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:13.119 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.119 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:13.119 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.119 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.119 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.119 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a6e5e32f-366c-4a2e-81d6-78daa8feebde 00:20:13.119 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.119 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.382 [2024-12-09 23:01:48.489914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:13.382 [2024-12-09 23:01:48.489968] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:13.382 [2024-12-09 23:01:48.489980] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:13.382 [2024-12-09 23:01:48.490296] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:13.382 [2024-12-09 23:01:48.490460] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:13.382 [2024-12-09 23:01:48.490469] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:13.382 [2024-12-09 23:01:48.490739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:13.382 NewBaseBdev 00:20:13.382 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.382 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:20:13.382 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:20:13.382 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:13.382 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:13.382 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:13.382 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:13.382 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:13.382 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.382 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.382 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.382 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:13.382 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.382 
23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.382 [ 00:20:13.382 { 00:20:13.382 "name": "NewBaseBdev", 00:20:13.382 "aliases": [ 00:20:13.383 "a6e5e32f-366c-4a2e-81d6-78daa8feebde" 00:20:13.383 ], 00:20:13.383 "product_name": "Malloc disk", 00:20:13.383 "block_size": 512, 00:20:13.383 "num_blocks": 65536, 00:20:13.383 "uuid": "a6e5e32f-366c-4a2e-81d6-78daa8feebde", 00:20:13.383 "assigned_rate_limits": { 00:20:13.383 "rw_ios_per_sec": 0, 00:20:13.383 "rw_mbytes_per_sec": 0, 00:20:13.383 "r_mbytes_per_sec": 0, 00:20:13.383 "w_mbytes_per_sec": 0 00:20:13.383 }, 00:20:13.383 "claimed": true, 00:20:13.383 "claim_type": "exclusive_write", 00:20:13.383 "zoned": false, 00:20:13.383 "supported_io_types": { 00:20:13.383 "read": true, 00:20:13.383 "write": true, 00:20:13.383 "unmap": true, 00:20:13.383 "flush": true, 00:20:13.383 "reset": true, 00:20:13.383 "nvme_admin": false, 00:20:13.383 "nvme_io": false, 00:20:13.383 "nvme_io_md": false, 00:20:13.383 "write_zeroes": true, 00:20:13.383 "zcopy": true, 00:20:13.383 "get_zone_info": false, 00:20:13.383 "zone_management": false, 00:20:13.383 "zone_append": false, 00:20:13.383 "compare": false, 00:20:13.383 "compare_and_write": false, 00:20:13.383 "abort": true, 00:20:13.383 "seek_hole": false, 00:20:13.383 "seek_data": false, 00:20:13.383 "copy": true, 00:20:13.383 "nvme_iov_md": false 00:20:13.383 }, 00:20:13.383 "memory_domains": [ 00:20:13.383 { 00:20:13.383 "dma_device_id": "system", 00:20:13.383 "dma_device_type": 1 00:20:13.383 }, 00:20:13.383 { 00:20:13.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:13.383 "dma_device_type": 2 00:20:13.383 } 00:20:13.383 ], 00:20:13.383 "driver_specific": {} 00:20:13.383 } 00:20:13.383 ] 00:20:13.383 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.383 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:13.383 23:01:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:20:13.383 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:13.383 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:13.383 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:13.383 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:13.383 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:13.383 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:13.383 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:13.383 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:13.383 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:13.383 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.383 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:13.383 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.383 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.383 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.383 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:13.383 "name": "Existed_Raid", 00:20:13.383 "uuid": "0a80f343-2a2a-446f-8962-aebd712f0ad5", 00:20:13.383 "strip_size_kb": 64, 00:20:13.383 "state": "online", 00:20:13.383 "raid_level": 
"raid0", 00:20:13.383 "superblock": false, 00:20:13.383 "num_base_bdevs": 3, 00:20:13.383 "num_base_bdevs_discovered": 3, 00:20:13.383 "num_base_bdevs_operational": 3, 00:20:13.383 "base_bdevs_list": [ 00:20:13.383 { 00:20:13.383 "name": "NewBaseBdev", 00:20:13.383 "uuid": "a6e5e32f-366c-4a2e-81d6-78daa8feebde", 00:20:13.383 "is_configured": true, 00:20:13.383 "data_offset": 0, 00:20:13.383 "data_size": 65536 00:20:13.383 }, 00:20:13.383 { 00:20:13.383 "name": "BaseBdev2", 00:20:13.383 "uuid": "558c62ab-a65c-4a09-b36f-0cd9333a152b", 00:20:13.383 "is_configured": true, 00:20:13.383 "data_offset": 0, 00:20:13.383 "data_size": 65536 00:20:13.383 }, 00:20:13.383 { 00:20:13.383 "name": "BaseBdev3", 00:20:13.383 "uuid": "c7cb6341-2161-432d-8513-f1363464139c", 00:20:13.383 "is_configured": true, 00:20:13.383 "data_offset": 0, 00:20:13.383 "data_size": 65536 00:20:13.383 } 00:20:13.383 ] 00:20:13.383 }' 00:20:13.383 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:13.383 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.645 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:20:13.645 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:13.645 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:13.645 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:13.645 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:13.645 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:13.645 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:13.645 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:20:13.645 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.645 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.645 [2024-12-09 23:01:48.826423] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:13.645 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.645 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:13.645 "name": "Existed_Raid", 00:20:13.645 "aliases": [ 00:20:13.645 "0a80f343-2a2a-446f-8962-aebd712f0ad5" 00:20:13.645 ], 00:20:13.645 "product_name": "Raid Volume", 00:20:13.645 "block_size": 512, 00:20:13.645 "num_blocks": 196608, 00:20:13.645 "uuid": "0a80f343-2a2a-446f-8962-aebd712f0ad5", 00:20:13.645 "assigned_rate_limits": { 00:20:13.645 "rw_ios_per_sec": 0, 00:20:13.645 "rw_mbytes_per_sec": 0, 00:20:13.645 "r_mbytes_per_sec": 0, 00:20:13.645 "w_mbytes_per_sec": 0 00:20:13.645 }, 00:20:13.645 "claimed": false, 00:20:13.646 "zoned": false, 00:20:13.646 "supported_io_types": { 00:20:13.646 "read": true, 00:20:13.646 "write": true, 00:20:13.646 "unmap": true, 00:20:13.646 "flush": true, 00:20:13.646 "reset": true, 00:20:13.646 "nvme_admin": false, 00:20:13.646 "nvme_io": false, 00:20:13.646 "nvme_io_md": false, 00:20:13.646 "write_zeroes": true, 00:20:13.646 "zcopy": false, 00:20:13.646 "get_zone_info": false, 00:20:13.646 "zone_management": false, 00:20:13.646 "zone_append": false, 00:20:13.646 "compare": false, 00:20:13.646 "compare_and_write": false, 00:20:13.646 "abort": false, 00:20:13.646 "seek_hole": false, 00:20:13.646 "seek_data": false, 00:20:13.646 "copy": false, 00:20:13.646 "nvme_iov_md": false 00:20:13.646 }, 00:20:13.646 "memory_domains": [ 00:20:13.646 { 00:20:13.646 "dma_device_id": "system", 00:20:13.646 "dma_device_type": 1 00:20:13.646 }, 00:20:13.646 { 00:20:13.646 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:13.646 "dma_device_type": 2 00:20:13.646 }, 00:20:13.646 { 00:20:13.646 "dma_device_id": "system", 00:20:13.646 "dma_device_type": 1 00:20:13.646 }, 00:20:13.646 { 00:20:13.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:13.646 "dma_device_type": 2 00:20:13.646 }, 00:20:13.646 { 00:20:13.646 "dma_device_id": "system", 00:20:13.646 "dma_device_type": 1 00:20:13.646 }, 00:20:13.646 { 00:20:13.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:13.646 "dma_device_type": 2 00:20:13.646 } 00:20:13.646 ], 00:20:13.646 "driver_specific": { 00:20:13.646 "raid": { 00:20:13.646 "uuid": "0a80f343-2a2a-446f-8962-aebd712f0ad5", 00:20:13.646 "strip_size_kb": 64, 00:20:13.646 "state": "online", 00:20:13.646 "raid_level": "raid0", 00:20:13.646 "superblock": false, 00:20:13.646 "num_base_bdevs": 3, 00:20:13.646 "num_base_bdevs_discovered": 3, 00:20:13.646 "num_base_bdevs_operational": 3, 00:20:13.646 "base_bdevs_list": [ 00:20:13.646 { 00:20:13.646 "name": "NewBaseBdev", 00:20:13.646 "uuid": "a6e5e32f-366c-4a2e-81d6-78daa8feebde", 00:20:13.646 "is_configured": true, 00:20:13.646 "data_offset": 0, 00:20:13.646 "data_size": 65536 00:20:13.646 }, 00:20:13.646 { 00:20:13.646 "name": "BaseBdev2", 00:20:13.646 "uuid": "558c62ab-a65c-4a09-b36f-0cd9333a152b", 00:20:13.646 "is_configured": true, 00:20:13.646 "data_offset": 0, 00:20:13.646 "data_size": 65536 00:20:13.646 }, 00:20:13.646 { 00:20:13.646 "name": "BaseBdev3", 00:20:13.646 "uuid": "c7cb6341-2161-432d-8513-f1363464139c", 00:20:13.646 "is_configured": true, 00:20:13.646 "data_offset": 0, 00:20:13.646 "data_size": 65536 00:20:13.646 } 00:20:13.646 ] 00:20:13.646 } 00:20:13.646 } 00:20:13.646 }' 00:20:13.646 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:13.646 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 
00:20:13.646 BaseBdev2 00:20:13.646 BaseBdev3' 00:20:13.646 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:13.646 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:13.646 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:13.646 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:20:13.646 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.646 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.646 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:13.646 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.646 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:13.646 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:13.646 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:13.646 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:13.646 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.646 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.646 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:13.646 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.646 
23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:13.646 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:13.646 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:13.646 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:13.646 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.646 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.646 23:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:13.646 23:01:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.905 23:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:13.905 23:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:13.905 23:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:13.905 23:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.905 23:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.905 [2024-12-09 23:01:49.014089] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:13.905 [2024-12-09 23:01:49.014280] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:13.905 [2024-12-09 23:01:49.014441] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:13.905 [2024-12-09 23:01:49.014531] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:20:13.905 [2024-12-09 23:01:49.014573] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:20:13.905 23:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.905 23:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62326 00:20:13.905 23:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62326 ']' 00:20:13.905 23:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62326 00:20:13.905 23:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:20:13.905 23:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:13.905 23:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62326 00:20:13.905 killing process with pid 62326 00:20:13.905 23:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:13.905 23:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:13.905 23:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62326' 00:20:13.905 23:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62326 00:20:13.905 [2024-12-09 23:01:49.047095] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:13.905 23:01:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62326 00:20:13.905 [2024-12-09 23:01:49.258842] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:14.844 ************************************ 00:20:14.844 END TEST raid_state_function_test 00:20:14.844 ************************************ 00:20:14.844 23:01:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@328 -- # return 0 00:20:14.844 00:20:14.844 real 0m8.205s 00:20:14.844 user 0m12.720s 00:20:14.844 sys 0m1.524s 00:20:14.844 23:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:14.844 23:01:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.844 23:01:50 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:20:14.844 23:01:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:14.844 23:01:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:14.844 23:01:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:14.844 ************************************ 00:20:14.844 START TEST raid_state_function_test_sb 00:20:14.844 ************************************ 00:20:14.844 23:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:20:14.844 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:20:14.844 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:20:14.844 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:14.844 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:14.844 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:14.845 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:14.845 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:14.845 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:14.845 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:20:14.845 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:14.845 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:14.845 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:14.845 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:20:14.845 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:14.845 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:14.845 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:14.845 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:14.845 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:14.845 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:14.845 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:14.845 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:14.845 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:20:14.845 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:20:14.845 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:14.845 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:14.845 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:14.845 23:01:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@229 -- # raid_pid=62925 00:20:14.845 Process raid pid: 62925 00:20:14.845 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62925' 00:20:14.845 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62925 00:20:14.845 23:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62925 ']' 00:20:14.845 23:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.845 23:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:14.845 23:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:14.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.845 23:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.845 23:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:14.845 23:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.106 [2024-12-09 23:01:50.234792] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:20:15.106 [2024-12-09 23:01:50.235176] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:15.106 [2024-12-09 23:01:50.399704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.365 [2024-12-09 23:01:50.550411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.624 [2024-12-09 23:01:50.727548] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:15.624 [2024-12-09 23:01:50.727611] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:15.883 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:15.883 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:20:15.883 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:15.883 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.883 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.883 [2024-12-09 23:01:51.109981] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:15.883 [2024-12-09 23:01:51.110374] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:15.883 [2024-12-09 23:01:51.110609] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:15.883 [2024-12-09 23:01:51.110667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:15.883 [2024-12-09 23:01:51.110687] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:20:15.883 [2024-12-09 23:01:51.110709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:15.883 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.883 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:15.883 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:15.883 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:15.883 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:15.883 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:15.883 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:15.883 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:15.883 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:15.883 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:15.883 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:15.883 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.883 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:15.883 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.883 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.883 23:01:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.883 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:15.883 "name": "Existed_Raid", 00:20:15.883 "uuid": "9b0712fa-8c1c-4735-9b8c-f17cc5918794", 00:20:15.883 "strip_size_kb": 64, 00:20:15.883 "state": "configuring", 00:20:15.883 "raid_level": "raid0", 00:20:15.883 "superblock": true, 00:20:15.883 "num_base_bdevs": 3, 00:20:15.883 "num_base_bdevs_discovered": 0, 00:20:15.883 "num_base_bdevs_operational": 3, 00:20:15.883 "base_bdevs_list": [ 00:20:15.883 { 00:20:15.883 "name": "BaseBdev1", 00:20:15.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.883 "is_configured": false, 00:20:15.883 "data_offset": 0, 00:20:15.883 "data_size": 0 00:20:15.883 }, 00:20:15.883 { 00:20:15.883 "name": "BaseBdev2", 00:20:15.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.883 "is_configured": false, 00:20:15.883 "data_offset": 0, 00:20:15.883 "data_size": 0 00:20:15.883 }, 00:20:15.883 { 00:20:15.883 "name": "BaseBdev3", 00:20:15.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.883 "is_configured": false, 00:20:15.883 "data_offset": 0, 00:20:15.883 "data_size": 0 00:20:15.883 } 00:20:15.883 ] 00:20:15.883 }' 00:20:15.883 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:15.883 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.143 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:16.143 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.143 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.143 [2024-12-09 23:01:51.450018] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:16.143 [2024-12-09 23:01:51.450256] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:16.143 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.143 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:16.144 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.144 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.144 [2024-12-09 23:01:51.458043] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:16.144 [2024-12-09 23:01:51.458401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:16.144 [2024-12-09 23:01:51.458494] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:16.144 [2024-12-09 23:01:51.458529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:16.144 [2024-12-09 23:01:51.458550] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:16.144 [2024-12-09 23:01:51.458577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:16.144 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.144 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:16.144 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.144 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.144 BaseBdev1 00:20:16.144 [2024-12-09 23:01:51.497273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
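The repeated state checks in this log come from the `verify_raid_bdev_state` shell helper, which fetches the raid bdev's JSON via `rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'` and compares fields against the expected values. A minimal Python sketch of the same comparison follows; the JSON shape is copied from the `Existed_Raid` dump above, but the helper itself is illustrative, not SPDK's actual implementation:

```python
import json

def verify_raid_bdev_state(info: dict, expected_state: str, raid_level: str,
                           strip_size_kb: int, num_operational: int) -> None:
    """Illustrative mirror of the bdev_raid.sh verify_raid_bdev_state checks."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size_kb
    assert info["num_base_bdevs_operational"] == num_operational
    # the discovered count must match the base bdevs flagged is_configured
    configured = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert configured == info["num_base_bdevs_discovered"]

# JSON shape copied (abridged) from the Existed_Raid dump in this log,
# at the point where only BaseBdev1 has been created and claimed
raid_bdev_info = json.loads("""{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid0",
  "superblock": true,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true,  "data_offset": 2048, "data_size": 63488},
    {"name": "BaseBdev2", "is_configured": false, "data_offset": 0,    "data_size": 0},
    {"name": "BaseBdev3", "is_configured": false, "data_offset": 0,    "data_size": 0}
  ]
}""")

verify_raid_bdev_state(raid_bdev_info, "configuring", "raid0", 64, 3)
```

The raid stays in the `configuring` state until every base bdev listed at `bdev_raid_create` time has appeared and been claimed, which is why the test re-runs this check after each `bdev_malloc_create`.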
00:20:16.144 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.144 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:16.144 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:16.144 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:16.144 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:16.144 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:16.144 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:16.144 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:16.144 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.144 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.406 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.406 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:16.406 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.406 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.406 [ 00:20:16.406 { 00:20:16.406 "name": "BaseBdev1", 00:20:16.406 "aliases": [ 00:20:16.406 "4acdb98a-2861-4278-a3ec-909cee18fefe" 00:20:16.406 ], 00:20:16.406 "product_name": "Malloc disk", 00:20:16.406 "block_size": 512, 00:20:16.406 "num_blocks": 65536, 00:20:16.406 "uuid": "4acdb98a-2861-4278-a3ec-909cee18fefe", 00:20:16.406 "assigned_rate_limits": { 00:20:16.406 
"rw_ios_per_sec": 0, 00:20:16.406 "rw_mbytes_per_sec": 0, 00:20:16.406 "r_mbytes_per_sec": 0, 00:20:16.406 "w_mbytes_per_sec": 0 00:20:16.406 }, 00:20:16.406 "claimed": true, 00:20:16.406 "claim_type": "exclusive_write", 00:20:16.406 "zoned": false, 00:20:16.406 "supported_io_types": { 00:20:16.406 "read": true, 00:20:16.406 "write": true, 00:20:16.406 "unmap": true, 00:20:16.406 "flush": true, 00:20:16.406 "reset": true, 00:20:16.406 "nvme_admin": false, 00:20:16.406 "nvme_io": false, 00:20:16.406 "nvme_io_md": false, 00:20:16.406 "write_zeroes": true, 00:20:16.406 "zcopy": true, 00:20:16.406 "get_zone_info": false, 00:20:16.406 "zone_management": false, 00:20:16.406 "zone_append": false, 00:20:16.406 "compare": false, 00:20:16.406 "compare_and_write": false, 00:20:16.406 "abort": true, 00:20:16.406 "seek_hole": false, 00:20:16.406 "seek_data": false, 00:20:16.406 "copy": true, 00:20:16.406 "nvme_iov_md": false 00:20:16.406 }, 00:20:16.406 "memory_domains": [ 00:20:16.406 { 00:20:16.406 "dma_device_id": "system", 00:20:16.406 "dma_device_type": 1 00:20:16.406 }, 00:20:16.406 { 00:20:16.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:16.406 "dma_device_type": 2 00:20:16.406 } 00:20:16.406 ], 00:20:16.406 "driver_specific": {} 00:20:16.406 } 00:20:16.406 ] 00:20:16.406 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.406 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:16.406 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:16.406 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:16.406 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:16.406 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:20:16.406 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:16.406 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:16.406 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:16.406 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:16.406 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:16.406 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:16.406 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.406 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:16.406 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.406 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.406 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.406 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:16.406 "name": "Existed_Raid", 00:20:16.406 "uuid": "b72b8434-d09a-4ffb-a85c-fa6d48b2b10d", 00:20:16.406 "strip_size_kb": 64, 00:20:16.406 "state": "configuring", 00:20:16.406 "raid_level": "raid0", 00:20:16.406 "superblock": true, 00:20:16.406 "num_base_bdevs": 3, 00:20:16.406 "num_base_bdevs_discovered": 1, 00:20:16.406 "num_base_bdevs_operational": 3, 00:20:16.406 "base_bdevs_list": [ 00:20:16.406 { 00:20:16.406 "name": "BaseBdev1", 00:20:16.407 "uuid": "4acdb98a-2861-4278-a3ec-909cee18fefe", 00:20:16.407 "is_configured": true, 00:20:16.407 "data_offset": 2048, 00:20:16.407 "data_size": 63488 
00:20:16.407 }, 00:20:16.407 { 00:20:16.407 "name": "BaseBdev2", 00:20:16.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.407 "is_configured": false, 00:20:16.407 "data_offset": 0, 00:20:16.407 "data_size": 0 00:20:16.407 }, 00:20:16.407 { 00:20:16.407 "name": "BaseBdev3", 00:20:16.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.407 "is_configured": false, 00:20:16.407 "data_offset": 0, 00:20:16.407 "data_size": 0 00:20:16.407 } 00:20:16.407 ] 00:20:16.407 }' 00:20:16.407 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:16.407 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.666 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:16.666 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.666 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.666 [2024-12-09 23:01:51.837409] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:16.666 [2024-12-09 23:01:51.837481] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:16.666 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.666 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:16.666 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.666 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.666 [2024-12-09 23:01:51.845490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:16.666 [2024-12-09 
23:01:51.847962] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:16.666 [2024-12-09 23:01:51.848200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:16.666 [2024-12-09 23:01:51.848225] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:16.666 [2024-12-09 23:01:51.848237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:16.666 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.666 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:16.666 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:16.666 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:16.666 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:16.666 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:16.666 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:16.666 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:16.666 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:16.666 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:16.666 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:16.666 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:16.666 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:20:16.666 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.666 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.666 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:16.667 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.667 23:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.667 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:16.667 "name": "Existed_Raid", 00:20:16.667 "uuid": "ee11da2b-6d6e-4e32-a76a-d45e5b362938", 00:20:16.667 "strip_size_kb": 64, 00:20:16.667 "state": "configuring", 00:20:16.667 "raid_level": "raid0", 00:20:16.667 "superblock": true, 00:20:16.667 "num_base_bdevs": 3, 00:20:16.667 "num_base_bdevs_discovered": 1, 00:20:16.667 "num_base_bdevs_operational": 3, 00:20:16.667 "base_bdevs_list": [ 00:20:16.667 { 00:20:16.667 "name": "BaseBdev1", 00:20:16.667 "uuid": "4acdb98a-2861-4278-a3ec-909cee18fefe", 00:20:16.667 "is_configured": true, 00:20:16.667 "data_offset": 2048, 00:20:16.667 "data_size": 63488 00:20:16.667 }, 00:20:16.667 { 00:20:16.667 "name": "BaseBdev2", 00:20:16.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.667 "is_configured": false, 00:20:16.667 "data_offset": 0, 00:20:16.667 "data_size": 0 00:20:16.667 }, 00:20:16.667 { 00:20:16.667 "name": "BaseBdev3", 00:20:16.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.667 "is_configured": false, 00:20:16.667 "data_offset": 0, 00:20:16.667 "data_size": 0 00:20:16.667 } 00:20:16.667 ] 00:20:16.667 }' 00:20:16.667 23:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:16.667 23:01:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:16.928 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:16.928 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.928 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.928 BaseBdev2 00:20:16.928 [2024-12-09 23:01:52.198463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:16.928 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.928 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:16.928 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:16.928 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:16.928 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:16.928 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:16.928 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:16.928 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:16.928 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.928 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.928 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.928 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:16.928 23:01:52 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.928 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.928 [ 00:20:16.928 { 00:20:16.928 "name": "BaseBdev2", 00:20:16.928 "aliases": [ 00:20:16.928 "d8926650-6c2f-4b45-ba27-dd07a8ad2a66" 00:20:16.928 ], 00:20:16.928 "product_name": "Malloc disk", 00:20:16.928 "block_size": 512, 00:20:16.928 "num_blocks": 65536, 00:20:16.928 "uuid": "d8926650-6c2f-4b45-ba27-dd07a8ad2a66", 00:20:16.928 "assigned_rate_limits": { 00:20:16.928 "rw_ios_per_sec": 0, 00:20:16.928 "rw_mbytes_per_sec": 0, 00:20:16.928 "r_mbytes_per_sec": 0, 00:20:16.928 "w_mbytes_per_sec": 0 00:20:16.928 }, 00:20:16.928 "claimed": true, 00:20:16.928 "claim_type": "exclusive_write", 00:20:16.928 "zoned": false, 00:20:16.928 "supported_io_types": { 00:20:16.928 "read": true, 00:20:16.928 "write": true, 00:20:16.928 "unmap": true, 00:20:16.928 "flush": true, 00:20:16.928 "reset": true, 00:20:16.928 "nvme_admin": false, 00:20:16.928 "nvme_io": false, 00:20:16.928 "nvme_io_md": false, 00:20:16.928 "write_zeroes": true, 00:20:16.928 "zcopy": true, 00:20:16.928 "get_zone_info": false, 00:20:16.928 "zone_management": false, 00:20:16.928 "zone_append": false, 00:20:16.928 "compare": false, 00:20:16.928 "compare_and_write": false, 00:20:16.928 "abort": true, 00:20:16.928 "seek_hole": false, 00:20:16.928 "seek_data": false, 00:20:16.928 "copy": true, 00:20:16.928 "nvme_iov_md": false 00:20:16.928 }, 00:20:16.928 "memory_domains": [ 00:20:16.928 { 00:20:16.928 "dma_device_id": "system", 00:20:16.928 "dma_device_type": 1 00:20:16.928 }, 00:20:16.928 { 00:20:16.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:16.928 "dma_device_type": 2 00:20:16.928 } 00:20:16.928 ], 00:20:16.928 "driver_specific": {} 00:20:16.928 } 00:20:16.928 ] 00:20:16.929 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.929 23:01:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:20:16.929 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:16.929 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:16.929 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:16.929 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:16.929 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:16.929 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:16.929 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:16.929 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:16.929 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:16.929 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:16.929 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:16.929 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:16.929 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.929 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.929 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:16.929 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.929 23:01:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.929 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:16.929 "name": "Existed_Raid", 00:20:16.929 "uuid": "ee11da2b-6d6e-4e32-a76a-d45e5b362938", 00:20:16.929 "strip_size_kb": 64, 00:20:16.929 "state": "configuring", 00:20:16.929 "raid_level": "raid0", 00:20:16.929 "superblock": true, 00:20:16.929 "num_base_bdevs": 3, 00:20:16.929 "num_base_bdevs_discovered": 2, 00:20:16.929 "num_base_bdevs_operational": 3, 00:20:16.929 "base_bdevs_list": [ 00:20:16.929 { 00:20:16.929 "name": "BaseBdev1", 00:20:16.929 "uuid": "4acdb98a-2861-4278-a3ec-909cee18fefe", 00:20:16.929 "is_configured": true, 00:20:16.929 "data_offset": 2048, 00:20:16.929 "data_size": 63488 00:20:16.929 }, 00:20:16.929 { 00:20:16.929 "name": "BaseBdev2", 00:20:16.929 "uuid": "d8926650-6c2f-4b45-ba27-dd07a8ad2a66", 00:20:16.929 "is_configured": true, 00:20:16.929 "data_offset": 2048, 00:20:16.929 "data_size": 63488 00:20:16.929 }, 00:20:16.929 { 00:20:16.929 "name": "BaseBdev3", 00:20:16.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.929 "is_configured": false, 00:20:16.929 "data_offset": 0, 00:20:16.929 "data_size": 0 00:20:16.929 } 00:20:16.929 ] 00:20:16.929 }' 00:20:16.929 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:16.929 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.500 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:17.500 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.500 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.500 [2024-12-09 23:01:52.603833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:17.500 [2024-12-09 23:01:52.604176] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:17.500 [2024-12-09 23:01:52.604203] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:17.501 [2024-12-09 23:01:52.604512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:17.501 BaseBdev3 00:20:17.501 [2024-12-09 23:01:52.604677] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:17.501 [2024-12-09 23:01:52.604695] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:17.501 [2024-12-09 23:01:52.604847] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:17.501 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.501 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:20:17.501 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:17.501 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:17.501 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:17.501 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:17.501 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:17.501 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:17.501 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.501 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.501 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:20:17.501 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:17.501 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.501 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.501 [ 00:20:17.501 { 00:20:17.501 "name": "BaseBdev3", 00:20:17.501 "aliases": [ 00:20:17.501 "dc11cd36-dba2-4cbb-a63b-2070c83e2344" 00:20:17.501 ], 00:20:17.501 "product_name": "Malloc disk", 00:20:17.501 "block_size": 512, 00:20:17.501 "num_blocks": 65536, 00:20:17.501 "uuid": "dc11cd36-dba2-4cbb-a63b-2070c83e2344", 00:20:17.501 "assigned_rate_limits": { 00:20:17.501 "rw_ios_per_sec": 0, 00:20:17.501 "rw_mbytes_per_sec": 0, 00:20:17.501 "r_mbytes_per_sec": 0, 00:20:17.501 "w_mbytes_per_sec": 0 00:20:17.501 }, 00:20:17.501 "claimed": true, 00:20:17.501 "claim_type": "exclusive_write", 00:20:17.501 "zoned": false, 00:20:17.501 "supported_io_types": { 00:20:17.501 "read": true, 00:20:17.501 "write": true, 00:20:17.501 "unmap": true, 00:20:17.501 "flush": true, 00:20:17.501 "reset": true, 00:20:17.501 "nvme_admin": false, 00:20:17.501 "nvme_io": false, 00:20:17.501 "nvme_io_md": false, 00:20:17.501 "write_zeroes": true, 00:20:17.501 "zcopy": true, 00:20:17.501 "get_zone_info": false, 00:20:17.501 "zone_management": false, 00:20:17.501 "zone_append": false, 00:20:17.501 "compare": false, 00:20:17.501 "compare_and_write": false, 00:20:17.501 "abort": true, 00:20:17.501 "seek_hole": false, 00:20:17.501 "seek_data": false, 00:20:17.501 "copy": true, 00:20:17.501 "nvme_iov_md": false 00:20:17.501 }, 00:20:17.501 "memory_domains": [ 00:20:17.501 { 00:20:17.501 "dma_device_id": "system", 00:20:17.501 "dma_device_type": 1 00:20:17.501 }, 00:20:17.501 { 00:20:17.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.501 "dma_device_type": 2 00:20:17.501 } 00:20:17.501 ], 00:20:17.501 "driver_specific": 
{} 00:20:17.501 } 00:20:17.501 ] 00:20:17.501 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.501 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:17.501 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:17.501 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:17.501 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:20:17.501 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:17.501 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:17.501 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:17.501 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:17.501 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:17.501 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.501 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.501 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.501 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.501 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.501 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.501 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.501 
23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:17.501 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.501 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.501 "name": "Existed_Raid", 00:20:17.501 "uuid": "ee11da2b-6d6e-4e32-a76a-d45e5b362938", 00:20:17.501 "strip_size_kb": 64, 00:20:17.501 "state": "online", 00:20:17.501 "raid_level": "raid0", 00:20:17.501 "superblock": true, 00:20:17.501 "num_base_bdevs": 3, 00:20:17.501 "num_base_bdevs_discovered": 3, 00:20:17.501 "num_base_bdevs_operational": 3, 00:20:17.501 "base_bdevs_list": [ 00:20:17.501 { 00:20:17.501 "name": "BaseBdev1", 00:20:17.501 "uuid": "4acdb98a-2861-4278-a3ec-909cee18fefe", 00:20:17.501 "is_configured": true, 00:20:17.501 "data_offset": 2048, 00:20:17.501 "data_size": 63488 00:20:17.501 }, 00:20:17.501 { 00:20:17.501 "name": "BaseBdev2", 00:20:17.501 "uuid": "d8926650-6c2f-4b45-ba27-dd07a8ad2a66", 00:20:17.501 "is_configured": true, 00:20:17.501 "data_offset": 2048, 00:20:17.501 "data_size": 63488 00:20:17.501 }, 00:20:17.501 { 00:20:17.501 "name": "BaseBdev3", 00:20:17.501 "uuid": "dc11cd36-dba2-4cbb-a63b-2070c83e2344", 00:20:17.501 "is_configured": true, 00:20:17.501 "data_offset": 2048, 00:20:17.501 "data_size": 63488 00:20:17.501 } 00:20:17.501 ] 00:20:17.501 }' 00:20:17.501 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.501 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.761 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:17.761 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:17.761 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:20:17.761 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:17.761 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:20:17.761 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:17.761 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:17.761 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.761 23:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.761 23:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:17.761 [2024-12-09 23:01:52.992395] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:17.761 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.761 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:17.761 "name": "Existed_Raid", 00:20:17.761 "aliases": [ 00:20:17.761 "ee11da2b-6d6e-4e32-a76a-d45e5b362938" 00:20:17.761 ], 00:20:17.761 "product_name": "Raid Volume", 00:20:17.761 "block_size": 512, 00:20:17.761 "num_blocks": 190464, 00:20:17.761 "uuid": "ee11da2b-6d6e-4e32-a76a-d45e5b362938", 00:20:17.761 "assigned_rate_limits": { 00:20:17.761 "rw_ios_per_sec": 0, 00:20:17.761 "rw_mbytes_per_sec": 0, 00:20:17.761 "r_mbytes_per_sec": 0, 00:20:17.761 "w_mbytes_per_sec": 0 00:20:17.761 }, 00:20:17.761 "claimed": false, 00:20:17.761 "zoned": false, 00:20:17.761 "supported_io_types": { 00:20:17.761 "read": true, 00:20:17.761 "write": true, 00:20:17.761 "unmap": true, 00:20:17.761 "flush": true, 00:20:17.761 "reset": true, 00:20:17.761 "nvme_admin": false, 00:20:17.761 "nvme_io": false, 00:20:17.761 "nvme_io_md": false, 00:20:17.761 
"write_zeroes": true, 00:20:17.761 "zcopy": false, 00:20:17.761 "get_zone_info": false, 00:20:17.761 "zone_management": false, 00:20:17.761 "zone_append": false, 00:20:17.761 "compare": false, 00:20:17.761 "compare_and_write": false, 00:20:17.761 "abort": false, 00:20:17.761 "seek_hole": false, 00:20:17.761 "seek_data": false, 00:20:17.761 "copy": false, 00:20:17.761 "nvme_iov_md": false 00:20:17.761 }, 00:20:17.761 "memory_domains": [ 00:20:17.761 { 00:20:17.761 "dma_device_id": "system", 00:20:17.761 "dma_device_type": 1 00:20:17.761 }, 00:20:17.761 { 00:20:17.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.761 "dma_device_type": 2 00:20:17.761 }, 00:20:17.761 { 00:20:17.761 "dma_device_id": "system", 00:20:17.761 "dma_device_type": 1 00:20:17.761 }, 00:20:17.761 { 00:20:17.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.761 "dma_device_type": 2 00:20:17.761 }, 00:20:17.761 { 00:20:17.761 "dma_device_id": "system", 00:20:17.761 "dma_device_type": 1 00:20:17.761 }, 00:20:17.761 { 00:20:17.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.761 "dma_device_type": 2 00:20:17.761 } 00:20:17.761 ], 00:20:17.761 "driver_specific": { 00:20:17.761 "raid": { 00:20:17.761 "uuid": "ee11da2b-6d6e-4e32-a76a-d45e5b362938", 00:20:17.761 "strip_size_kb": 64, 00:20:17.761 "state": "online", 00:20:17.761 "raid_level": "raid0", 00:20:17.761 "superblock": true, 00:20:17.761 "num_base_bdevs": 3, 00:20:17.761 "num_base_bdevs_discovered": 3, 00:20:17.761 "num_base_bdevs_operational": 3, 00:20:17.761 "base_bdevs_list": [ 00:20:17.761 { 00:20:17.761 "name": "BaseBdev1", 00:20:17.761 "uuid": "4acdb98a-2861-4278-a3ec-909cee18fefe", 00:20:17.761 "is_configured": true, 00:20:17.761 "data_offset": 2048, 00:20:17.761 "data_size": 63488 00:20:17.761 }, 00:20:17.761 { 00:20:17.761 "name": "BaseBdev2", 00:20:17.761 "uuid": "d8926650-6c2f-4b45-ba27-dd07a8ad2a66", 00:20:17.761 "is_configured": true, 00:20:17.761 "data_offset": 2048, 00:20:17.761 "data_size": 63488 00:20:17.761 }, 
00:20:17.761 { 00:20:17.761 "name": "BaseBdev3", 00:20:17.761 "uuid": "dc11cd36-dba2-4cbb-a63b-2070c83e2344", 00:20:17.761 "is_configured": true, 00:20:17.761 "data_offset": 2048, 00:20:17.761 "data_size": 63488 00:20:17.761 } 00:20:17.761 ] 00:20:17.761 } 00:20:17.761 } 00:20:17.761 }' 00:20:17.761 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:17.761 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:17.761 BaseBdev2 00:20:17.761 BaseBdev3' 00:20:17.761 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:17.761 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:17.761 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:17.761 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:17.761 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.761 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:17.761 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.761 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:18.022 
23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.022 [2024-12-09 23:01:53.200093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:18.022 [2024-12-09 23:01:53.200346] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:18.022 [2024-12-09 23:01:53.201094] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:18.022 "name": "Existed_Raid", 00:20:18.022 "uuid": "ee11da2b-6d6e-4e32-a76a-d45e5b362938", 00:20:18.022 "strip_size_kb": 64, 00:20:18.022 "state": "offline", 00:20:18.022 "raid_level": "raid0", 00:20:18.022 "superblock": true, 00:20:18.022 "num_base_bdevs": 3, 00:20:18.022 "num_base_bdevs_discovered": 2, 00:20:18.022 "num_base_bdevs_operational": 2, 00:20:18.022 "base_bdevs_list": [ 00:20:18.022 { 00:20:18.022 "name": null, 00:20:18.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.022 "is_configured": false, 00:20:18.022 "data_offset": 0, 00:20:18.022 "data_size": 63488 00:20:18.022 }, 00:20:18.022 { 00:20:18.022 "name": "BaseBdev2", 00:20:18.022 "uuid": "d8926650-6c2f-4b45-ba27-dd07a8ad2a66", 00:20:18.022 "is_configured": true, 00:20:18.022 "data_offset": 2048, 00:20:18.022 "data_size": 63488 00:20:18.022 }, 00:20:18.022 { 00:20:18.022 "name": "BaseBdev3", 00:20:18.022 "uuid": "dc11cd36-dba2-4cbb-a63b-2070c83e2344", 
00:20:18.022 "is_configured": true, 00:20:18.022 "data_offset": 2048, 00:20:18.022 "data_size": 63488 00:20:18.022 } 00:20:18.022 ] 00:20:18.022 }' 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:18.022 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.283 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:18.283 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:18.283 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.283 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.283 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.283 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:18.283 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.283 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:18.283 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:18.283 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:18.283 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.283 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.283 [2024-12-09 23:01:53.634285] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.543 [2024-12-09 23:01:53.737610] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:18.543 [2024-12-09 23:01:53.737836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] 
| select(.)' 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.543 BaseBdev2 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:18.543 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.544 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.544 [ 00:20:18.544 { 00:20:18.544 "name": "BaseBdev2", 00:20:18.544 "aliases": [ 00:20:18.544 "3f8f59c0-f4e1-4886-bec2-2125815eb9cd" 00:20:18.544 ], 00:20:18.544 "product_name": "Malloc disk", 00:20:18.544 "block_size": 512, 00:20:18.544 "num_blocks": 65536, 00:20:18.544 "uuid": "3f8f59c0-f4e1-4886-bec2-2125815eb9cd", 00:20:18.544 "assigned_rate_limits": { 00:20:18.544 "rw_ios_per_sec": 0, 00:20:18.544 "rw_mbytes_per_sec": 0, 00:20:18.544 "r_mbytes_per_sec": 0, 00:20:18.544 "w_mbytes_per_sec": 0 00:20:18.544 }, 00:20:18.544 "claimed": false, 00:20:18.544 "zoned": false, 00:20:18.544 "supported_io_types": { 00:20:18.544 "read": true, 00:20:18.544 "write": true, 00:20:18.544 "unmap": true, 00:20:18.544 "flush": true, 00:20:18.544 "reset": true, 00:20:18.544 "nvme_admin": false, 00:20:18.544 "nvme_io": false, 00:20:18.544 "nvme_io_md": false, 00:20:18.544 "write_zeroes": true, 00:20:18.544 "zcopy": true, 00:20:18.544 "get_zone_info": false, 00:20:18.544 "zone_management": false, 00:20:18.544 
"zone_append": false, 00:20:18.544 "compare": false, 00:20:18.544 "compare_and_write": false, 00:20:18.544 "abort": true, 00:20:18.544 "seek_hole": false, 00:20:18.544 "seek_data": false, 00:20:18.544 "copy": true, 00:20:18.544 "nvme_iov_md": false 00:20:18.544 }, 00:20:18.544 "memory_domains": [ 00:20:18.544 { 00:20:18.544 "dma_device_id": "system", 00:20:18.544 "dma_device_type": 1 00:20:18.544 }, 00:20:18.544 { 00:20:18.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:18.544 "dma_device_type": 2 00:20:18.544 } 00:20:18.544 ], 00:20:18.544 "driver_specific": {} 00:20:18.544 } 00:20:18.544 ] 00:20:18.544 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.544 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:18.544 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:18.544 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:18.544 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:18.544 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.804 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.804 BaseBdev3 00:20:18.804 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.804 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:20:18.804 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:18.804 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:18.804 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:18.804 
23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:18.804 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:18.804 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:18.804 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.804 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.804 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.804 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:18.804 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.804 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.804 [ 00:20:18.804 { 00:20:18.804 "name": "BaseBdev3", 00:20:18.804 "aliases": [ 00:20:18.804 "61a716c6-1b18-41d7-b0d6-e200ba2f4d03" 00:20:18.804 ], 00:20:18.804 "product_name": "Malloc disk", 00:20:18.804 "block_size": 512, 00:20:18.804 "num_blocks": 65536, 00:20:18.804 "uuid": "61a716c6-1b18-41d7-b0d6-e200ba2f4d03", 00:20:18.804 "assigned_rate_limits": { 00:20:18.804 "rw_ios_per_sec": 0, 00:20:18.804 "rw_mbytes_per_sec": 0, 00:20:18.804 "r_mbytes_per_sec": 0, 00:20:18.804 "w_mbytes_per_sec": 0 00:20:18.804 }, 00:20:18.804 "claimed": false, 00:20:18.804 "zoned": false, 00:20:18.804 "supported_io_types": { 00:20:18.804 "read": true, 00:20:18.804 "write": true, 00:20:18.804 "unmap": true, 00:20:18.804 "flush": true, 00:20:18.804 "reset": true, 00:20:18.804 "nvme_admin": false, 00:20:18.804 "nvme_io": false, 00:20:18.804 "nvme_io_md": false, 00:20:18.804 "write_zeroes": true, 00:20:18.804 "zcopy": true, 00:20:18.804 "get_zone_info": false, 
00:20:18.804 "zone_management": false, 00:20:18.804 "zone_append": false, 00:20:18.804 "compare": false, 00:20:18.804 "compare_and_write": false, 00:20:18.804 "abort": true, 00:20:18.804 "seek_hole": false, 00:20:18.804 "seek_data": false, 00:20:18.804 "copy": true, 00:20:18.804 "nvme_iov_md": false 00:20:18.804 }, 00:20:18.804 "memory_domains": [ 00:20:18.804 { 00:20:18.804 "dma_device_id": "system", 00:20:18.804 "dma_device_type": 1 00:20:18.804 }, 00:20:18.804 { 00:20:18.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:18.804 "dma_device_type": 2 00:20:18.804 } 00:20:18.804 ], 00:20:18.804 "driver_specific": {} 00:20:18.804 } 00:20:18.804 ] 00:20:18.804 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.804 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:18.804 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:18.804 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:18.804 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:18.804 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.804 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.804 [2024-12-09 23:01:53.969416] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:18.804 [2024-12-09 23:01:53.969648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:18.804 [2024-12-09 23:01:53.969748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:18.805 [2024-12-09 23:01:53.971982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:20:18.805 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.805 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:18.805 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:18.805 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:18.805 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:18.805 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:18.805 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:18.805 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:18.805 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:18.805 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:18.805 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:18.805 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.805 23:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:18.805 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.805 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.805 23:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.805 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:20:18.805 "name": "Existed_Raid", 00:20:18.805 "uuid": "e81885c1-faaf-4dd2-8f9a-454518ec1c5c", 00:20:18.805 "strip_size_kb": 64, 00:20:18.805 "state": "configuring", 00:20:18.805 "raid_level": "raid0", 00:20:18.805 "superblock": true, 00:20:18.805 "num_base_bdevs": 3, 00:20:18.805 "num_base_bdevs_discovered": 2, 00:20:18.805 "num_base_bdevs_operational": 3, 00:20:18.805 "base_bdevs_list": [ 00:20:18.805 { 00:20:18.805 "name": "BaseBdev1", 00:20:18.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.805 "is_configured": false, 00:20:18.805 "data_offset": 0, 00:20:18.805 "data_size": 0 00:20:18.805 }, 00:20:18.805 { 00:20:18.805 "name": "BaseBdev2", 00:20:18.805 "uuid": "3f8f59c0-f4e1-4886-bec2-2125815eb9cd", 00:20:18.805 "is_configured": true, 00:20:18.805 "data_offset": 2048, 00:20:18.805 "data_size": 63488 00:20:18.805 }, 00:20:18.805 { 00:20:18.805 "name": "BaseBdev3", 00:20:18.805 "uuid": "61a716c6-1b18-41d7-b0d6-e200ba2f4d03", 00:20:18.805 "is_configured": true, 00:20:18.805 "data_offset": 2048, 00:20:18.805 "data_size": 63488 00:20:18.805 } 00:20:18.805 ] 00:20:18.805 }' 00:20:18.805 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:18.805 23:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.066 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:19.066 23:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.066 23:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.066 [2024-12-09 23:01:54.281524] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:19.066 23:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.066 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:19.066 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:19.066 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:19.066 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:19.066 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:19.066 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:19.066 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:19.066 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:19.066 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:19.066 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:19.066 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.066 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:19.066 23:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.066 23:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.066 23:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.066 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:19.066 "name": "Existed_Raid", 00:20:19.066 "uuid": "e81885c1-faaf-4dd2-8f9a-454518ec1c5c", 00:20:19.066 "strip_size_kb": 64, 00:20:19.066 "state": "configuring", 00:20:19.066 "raid_level": "raid0", 
00:20:19.066 "superblock": true, 00:20:19.066 "num_base_bdevs": 3, 00:20:19.066 "num_base_bdevs_discovered": 1, 00:20:19.066 "num_base_bdevs_operational": 3, 00:20:19.066 "base_bdevs_list": [ 00:20:19.066 { 00:20:19.066 "name": "BaseBdev1", 00:20:19.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.066 "is_configured": false, 00:20:19.066 "data_offset": 0, 00:20:19.066 "data_size": 0 00:20:19.066 }, 00:20:19.066 { 00:20:19.066 "name": null, 00:20:19.066 "uuid": "3f8f59c0-f4e1-4886-bec2-2125815eb9cd", 00:20:19.066 "is_configured": false, 00:20:19.066 "data_offset": 0, 00:20:19.066 "data_size": 63488 00:20:19.066 }, 00:20:19.066 { 00:20:19.066 "name": "BaseBdev3", 00:20:19.066 "uuid": "61a716c6-1b18-41d7-b0d6-e200ba2f4d03", 00:20:19.066 "is_configured": true, 00:20:19.066 "data_offset": 2048, 00:20:19.066 "data_size": 63488 00:20:19.066 } 00:20:19.066 ] 00:20:19.066 }' 00:20:19.066 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:19.066 23:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.344 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.344 23:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.344 23:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.344 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:19.344 23:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.344 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:20:19.344 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:19.344 23:01:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.344 23:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.344 [2024-12-09 23:01:54.681877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:19.344 BaseBdev1 00:20:19.344 23:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.344 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:20:19.344 23:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:19.344 23:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:19.344 23:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:19.344 23:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:19.344 23:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:19.344 23:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:19.344 23:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.344 23:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.638 23:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.638 23:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:19.638 23:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.638 23:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.638 [ 00:20:19.638 { 00:20:19.638 "name": "BaseBdev1", 00:20:19.638 
"aliases": [ 00:20:19.638 "e9a2edbc-2f70-4678-ac50-4803aa772069" 00:20:19.638 ], 00:20:19.638 "product_name": "Malloc disk", 00:20:19.638 "block_size": 512, 00:20:19.638 "num_blocks": 65536, 00:20:19.638 "uuid": "e9a2edbc-2f70-4678-ac50-4803aa772069", 00:20:19.638 "assigned_rate_limits": { 00:20:19.638 "rw_ios_per_sec": 0, 00:20:19.638 "rw_mbytes_per_sec": 0, 00:20:19.638 "r_mbytes_per_sec": 0, 00:20:19.638 "w_mbytes_per_sec": 0 00:20:19.638 }, 00:20:19.638 "claimed": true, 00:20:19.638 "claim_type": "exclusive_write", 00:20:19.638 "zoned": false, 00:20:19.638 "supported_io_types": { 00:20:19.638 "read": true, 00:20:19.638 "write": true, 00:20:19.638 "unmap": true, 00:20:19.638 "flush": true, 00:20:19.638 "reset": true, 00:20:19.638 "nvme_admin": false, 00:20:19.638 "nvme_io": false, 00:20:19.638 "nvme_io_md": false, 00:20:19.638 "write_zeroes": true, 00:20:19.638 "zcopy": true, 00:20:19.638 "get_zone_info": false, 00:20:19.638 "zone_management": false, 00:20:19.638 "zone_append": false, 00:20:19.638 "compare": false, 00:20:19.638 "compare_and_write": false, 00:20:19.638 "abort": true, 00:20:19.638 "seek_hole": false, 00:20:19.638 "seek_data": false, 00:20:19.638 "copy": true, 00:20:19.638 "nvme_iov_md": false 00:20:19.638 }, 00:20:19.638 "memory_domains": [ 00:20:19.638 { 00:20:19.638 "dma_device_id": "system", 00:20:19.638 "dma_device_type": 1 00:20:19.638 }, 00:20:19.638 { 00:20:19.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:19.638 "dma_device_type": 2 00:20:19.638 } 00:20:19.638 ], 00:20:19.638 "driver_specific": {} 00:20:19.638 } 00:20:19.638 ] 00:20:19.638 23:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.638 23:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:19.638 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:19.638 23:01:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:19.638 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:19.638 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:19.638 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:19.638 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:19.638 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:19.638 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:19.638 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:19.638 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:19.638 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.638 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:19.638 23:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.638 23:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.638 23:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.638 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:19.638 "name": "Existed_Raid", 00:20:19.638 "uuid": "e81885c1-faaf-4dd2-8f9a-454518ec1c5c", 00:20:19.638 "strip_size_kb": 64, 00:20:19.638 "state": "configuring", 00:20:19.638 "raid_level": "raid0", 00:20:19.638 "superblock": true, 00:20:19.638 "num_base_bdevs": 3, 00:20:19.638 
"num_base_bdevs_discovered": 2, 00:20:19.638 "num_base_bdevs_operational": 3, 00:20:19.638 "base_bdevs_list": [ 00:20:19.638 { 00:20:19.638 "name": "BaseBdev1", 00:20:19.638 "uuid": "e9a2edbc-2f70-4678-ac50-4803aa772069", 00:20:19.638 "is_configured": true, 00:20:19.638 "data_offset": 2048, 00:20:19.638 "data_size": 63488 00:20:19.638 }, 00:20:19.638 { 00:20:19.638 "name": null, 00:20:19.638 "uuid": "3f8f59c0-f4e1-4886-bec2-2125815eb9cd", 00:20:19.638 "is_configured": false, 00:20:19.638 "data_offset": 0, 00:20:19.638 "data_size": 63488 00:20:19.638 }, 00:20:19.638 { 00:20:19.638 "name": "BaseBdev3", 00:20:19.638 "uuid": "61a716c6-1b18-41d7-b0d6-e200ba2f4d03", 00:20:19.638 "is_configured": true, 00:20:19.638 "data_offset": 2048, 00:20:19.638 "data_size": 63488 00:20:19.638 } 00:20:19.638 ] 00:20:19.638 }' 00:20:19.638 23:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:19.638 23:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.904 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:19.904 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.904 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.904 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.904 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.904 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:20:19.904 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:20:19.904 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.904 23:01:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.904 [2024-12-09 23:01:55.066062] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:19.904 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.904 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:19.904 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:19.904 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:19.904 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:19.904 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:19.904 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:19.904 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:19.904 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:19.904 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:19.904 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:19.904 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.904 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.904 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.904 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:19.905 23:01:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.905 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:19.905 "name": "Existed_Raid", 00:20:19.905 "uuid": "e81885c1-faaf-4dd2-8f9a-454518ec1c5c", 00:20:19.905 "strip_size_kb": 64, 00:20:19.905 "state": "configuring", 00:20:19.905 "raid_level": "raid0", 00:20:19.905 "superblock": true, 00:20:19.905 "num_base_bdevs": 3, 00:20:19.905 "num_base_bdevs_discovered": 1, 00:20:19.905 "num_base_bdevs_operational": 3, 00:20:19.905 "base_bdevs_list": [ 00:20:19.905 { 00:20:19.905 "name": "BaseBdev1", 00:20:19.905 "uuid": "e9a2edbc-2f70-4678-ac50-4803aa772069", 00:20:19.905 "is_configured": true, 00:20:19.905 "data_offset": 2048, 00:20:19.905 "data_size": 63488 00:20:19.905 }, 00:20:19.905 { 00:20:19.905 "name": null, 00:20:19.905 "uuid": "3f8f59c0-f4e1-4886-bec2-2125815eb9cd", 00:20:19.905 "is_configured": false, 00:20:19.905 "data_offset": 0, 00:20:19.905 "data_size": 63488 00:20:19.905 }, 00:20:19.905 { 00:20:19.905 "name": null, 00:20:19.905 "uuid": "61a716c6-1b18-41d7-b0d6-e200ba2f4d03", 00:20:19.905 "is_configured": false, 00:20:19.905 "data_offset": 0, 00:20:19.905 "data_size": 63488 00:20:19.905 } 00:20:19.905 ] 00:20:19.905 }' 00:20:19.905 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:19.905 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.169 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.169 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.169 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.169 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:20.169 23:01:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.169 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:20:20.169 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:20.169 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.169 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.169 [2024-12-09 23:01:55.422198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:20.169 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.169 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:20.169 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:20.169 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:20.169 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:20.169 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:20.169 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:20.169 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:20.169 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:20.169 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:20.169 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:20:20.170 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.170 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:20.170 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.170 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.170 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.170 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:20.170 "name": "Existed_Raid", 00:20:20.170 "uuid": "e81885c1-faaf-4dd2-8f9a-454518ec1c5c", 00:20:20.170 "strip_size_kb": 64, 00:20:20.170 "state": "configuring", 00:20:20.170 "raid_level": "raid0", 00:20:20.170 "superblock": true, 00:20:20.170 "num_base_bdevs": 3, 00:20:20.170 "num_base_bdevs_discovered": 2, 00:20:20.170 "num_base_bdevs_operational": 3, 00:20:20.170 "base_bdevs_list": [ 00:20:20.170 { 00:20:20.170 "name": "BaseBdev1", 00:20:20.170 "uuid": "e9a2edbc-2f70-4678-ac50-4803aa772069", 00:20:20.170 "is_configured": true, 00:20:20.170 "data_offset": 2048, 00:20:20.170 "data_size": 63488 00:20:20.170 }, 00:20:20.170 { 00:20:20.170 "name": null, 00:20:20.170 "uuid": "3f8f59c0-f4e1-4886-bec2-2125815eb9cd", 00:20:20.170 "is_configured": false, 00:20:20.170 "data_offset": 0, 00:20:20.170 "data_size": 63488 00:20:20.170 }, 00:20:20.170 { 00:20:20.170 "name": "BaseBdev3", 00:20:20.170 "uuid": "61a716c6-1b18-41d7-b0d6-e200ba2f4d03", 00:20:20.170 "is_configured": true, 00:20:20.170 "data_offset": 2048, 00:20:20.170 "data_size": 63488 00:20:20.170 } 00:20:20.170 ] 00:20:20.170 }' 00:20:20.170 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:20.170 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:20:20.432 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:20.432 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.432 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.432 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.432 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.432 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:20:20.432 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:20.432 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.432 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.432 [2024-12-09 23:01:55.786287] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:20.693 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.693 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:20.693 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:20.693 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:20.693 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:20.693 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:20.693 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:20:20.693 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:20.693 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:20.693 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:20.693 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:20.693 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.693 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:20.693 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.693 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.694 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.694 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:20.694 "name": "Existed_Raid", 00:20:20.694 "uuid": "e81885c1-faaf-4dd2-8f9a-454518ec1c5c", 00:20:20.694 "strip_size_kb": 64, 00:20:20.694 "state": "configuring", 00:20:20.694 "raid_level": "raid0", 00:20:20.694 "superblock": true, 00:20:20.694 "num_base_bdevs": 3, 00:20:20.694 "num_base_bdevs_discovered": 1, 00:20:20.694 "num_base_bdevs_operational": 3, 00:20:20.694 "base_bdevs_list": [ 00:20:20.694 { 00:20:20.694 "name": null, 00:20:20.694 "uuid": "e9a2edbc-2f70-4678-ac50-4803aa772069", 00:20:20.694 "is_configured": false, 00:20:20.694 "data_offset": 0, 00:20:20.694 "data_size": 63488 00:20:20.694 }, 00:20:20.694 { 00:20:20.694 "name": null, 00:20:20.694 "uuid": "3f8f59c0-f4e1-4886-bec2-2125815eb9cd", 00:20:20.694 "is_configured": false, 00:20:20.694 "data_offset": 0, 00:20:20.694 "data_size": 63488 00:20:20.694 
}, 00:20:20.694 { 00:20:20.694 "name": "BaseBdev3", 00:20:20.694 "uuid": "61a716c6-1b18-41d7-b0d6-e200ba2f4d03", 00:20:20.694 "is_configured": true, 00:20:20.694 "data_offset": 2048, 00:20:20.694 "data_size": 63488 00:20:20.694 } 00:20:20.694 ] 00:20:20.694 }' 00:20:20.694 23:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:20.694 23:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.955 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.955 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:20.955 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.955 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.955 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.955 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:20:20.955 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:20.955 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.955 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.955 [2024-12-09 23:01:56.214545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:20.955 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.955 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:20.955 23:01:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:20.955 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:20.955 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:20.955 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:20.955 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:20.955 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:20.955 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:20.955 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:20.955 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:20.955 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.955 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:20.955 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.955 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.955 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.955 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:20.955 "name": "Existed_Raid", 00:20:20.955 "uuid": "e81885c1-faaf-4dd2-8f9a-454518ec1c5c", 00:20:20.956 "strip_size_kb": 64, 00:20:20.956 "state": "configuring", 00:20:20.956 "raid_level": "raid0", 00:20:20.956 "superblock": true, 00:20:20.956 "num_base_bdevs": 3, 00:20:20.956 "num_base_bdevs_discovered": 2, 00:20:20.956 
"num_base_bdevs_operational": 3, 00:20:20.956 "base_bdevs_list": [ 00:20:20.956 { 00:20:20.956 "name": null, 00:20:20.956 "uuid": "e9a2edbc-2f70-4678-ac50-4803aa772069", 00:20:20.956 "is_configured": false, 00:20:20.956 "data_offset": 0, 00:20:20.956 "data_size": 63488 00:20:20.956 }, 00:20:20.956 { 00:20:20.956 "name": "BaseBdev2", 00:20:20.956 "uuid": "3f8f59c0-f4e1-4886-bec2-2125815eb9cd", 00:20:20.956 "is_configured": true, 00:20:20.956 "data_offset": 2048, 00:20:20.956 "data_size": 63488 00:20:20.956 }, 00:20:20.956 { 00:20:20.956 "name": "BaseBdev3", 00:20:20.956 "uuid": "61a716c6-1b18-41d7-b0d6-e200ba2f4d03", 00:20:20.956 "is_configured": true, 00:20:20.956 "data_offset": 2048, 00:20:20.956 "data_size": 63488 00:20:20.956 } 00:20:20.956 ] 00:20:20.956 }' 00:20:20.956 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:20.956 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.216 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.216 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.216 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.216 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:21.216 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.216 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:21.216 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.216 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:21.216 23:01:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.216 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e9a2edbc-2f70-4678-ac50-4803aa772069 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.482 [2024-12-09 23:01:56.635233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:21.482 NewBaseBdev 00:20:21.482 [2024-12-09 23:01:56.635499] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:21.482 [2024-12-09 23:01:56.635518] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:21.482 [2024-12-09 23:01:56.635811] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:21.482 [2024-12-09 23:01:56.635961] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:21.482 [2024-12-09 23:01:56.635969] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:21.482 [2024-12-09 23:01:56.636138] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:20:21.482 23:01:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.482 [ 00:20:21.482 { 00:20:21.482 "name": "NewBaseBdev", 00:20:21.482 "aliases": [ 00:20:21.482 "e9a2edbc-2f70-4678-ac50-4803aa772069" 00:20:21.482 ], 00:20:21.482 "product_name": "Malloc disk", 00:20:21.482 "block_size": 512, 00:20:21.482 "num_blocks": 65536, 00:20:21.482 "uuid": "e9a2edbc-2f70-4678-ac50-4803aa772069", 00:20:21.482 "assigned_rate_limits": { 00:20:21.482 "rw_ios_per_sec": 0, 00:20:21.482 "rw_mbytes_per_sec": 0, 00:20:21.482 "r_mbytes_per_sec": 0, 00:20:21.482 "w_mbytes_per_sec": 0 00:20:21.482 }, 00:20:21.482 "claimed": true, 00:20:21.482 "claim_type": "exclusive_write", 00:20:21.482 "zoned": false, 00:20:21.482 "supported_io_types": { 00:20:21.482 "read": true, 00:20:21.482 "write": true, 00:20:21.482 "unmap": true, 
00:20:21.482 "flush": true, 00:20:21.482 "reset": true, 00:20:21.482 "nvme_admin": false, 00:20:21.482 "nvme_io": false, 00:20:21.482 "nvme_io_md": false, 00:20:21.482 "write_zeroes": true, 00:20:21.482 "zcopy": true, 00:20:21.482 "get_zone_info": false, 00:20:21.482 "zone_management": false, 00:20:21.482 "zone_append": false, 00:20:21.482 "compare": false, 00:20:21.482 "compare_and_write": false, 00:20:21.482 "abort": true, 00:20:21.482 "seek_hole": false, 00:20:21.482 "seek_data": false, 00:20:21.482 "copy": true, 00:20:21.482 "nvme_iov_md": false 00:20:21.482 }, 00:20:21.482 "memory_domains": [ 00:20:21.482 { 00:20:21.482 "dma_device_id": "system", 00:20:21.482 "dma_device_type": 1 00:20:21.482 }, 00:20:21.482 { 00:20:21.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:21.482 "dma_device_type": 2 00:20:21.482 } 00:20:21.482 ], 00:20:21.482 "driver_specific": {} 00:20:21.482 } 00:20:21.482 ] 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:21.482 23:01:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:21.482 "name": "Existed_Raid", 00:20:21.482 "uuid": "e81885c1-faaf-4dd2-8f9a-454518ec1c5c", 00:20:21.482 "strip_size_kb": 64, 00:20:21.482 "state": "online", 00:20:21.482 "raid_level": "raid0", 00:20:21.482 "superblock": true, 00:20:21.482 "num_base_bdevs": 3, 00:20:21.482 "num_base_bdevs_discovered": 3, 00:20:21.482 "num_base_bdevs_operational": 3, 00:20:21.482 "base_bdevs_list": [ 00:20:21.482 { 00:20:21.482 "name": "NewBaseBdev", 00:20:21.482 "uuid": "e9a2edbc-2f70-4678-ac50-4803aa772069", 00:20:21.482 "is_configured": true, 00:20:21.482 "data_offset": 2048, 00:20:21.482 "data_size": 63488 00:20:21.482 }, 00:20:21.482 { 00:20:21.482 "name": "BaseBdev2", 00:20:21.482 "uuid": "3f8f59c0-f4e1-4886-bec2-2125815eb9cd", 00:20:21.482 "is_configured": true, 00:20:21.482 "data_offset": 2048, 00:20:21.482 "data_size": 63488 00:20:21.482 }, 00:20:21.482 { 00:20:21.482 "name": "BaseBdev3", 00:20:21.482 "uuid": "61a716c6-1b18-41d7-b0d6-e200ba2f4d03", 00:20:21.482 "is_configured": 
true, 00:20:21.482 "data_offset": 2048, 00:20:21.482 "data_size": 63488 00:20:21.482 } 00:20:21.482 ] 00:20:21.482 }' 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:21.482 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.744 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:20:21.744 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:21.744 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:21.744 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:21.745 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:20:21.745 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:21.745 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:21.745 23:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:21.745 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.745 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.745 [2024-12-09 23:01:56.979754] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:21.745 23:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.745 23:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:21.745 "name": "Existed_Raid", 00:20:21.745 "aliases": [ 00:20:21.745 "e81885c1-faaf-4dd2-8f9a-454518ec1c5c" 00:20:21.745 ], 00:20:21.745 "product_name": "Raid Volume", 
00:20:21.745 "block_size": 512, 00:20:21.745 "num_blocks": 190464, 00:20:21.745 "uuid": "e81885c1-faaf-4dd2-8f9a-454518ec1c5c", 00:20:21.745 "assigned_rate_limits": { 00:20:21.745 "rw_ios_per_sec": 0, 00:20:21.745 "rw_mbytes_per_sec": 0, 00:20:21.745 "r_mbytes_per_sec": 0, 00:20:21.745 "w_mbytes_per_sec": 0 00:20:21.745 }, 00:20:21.745 "claimed": false, 00:20:21.745 "zoned": false, 00:20:21.745 "supported_io_types": { 00:20:21.745 "read": true, 00:20:21.745 "write": true, 00:20:21.745 "unmap": true, 00:20:21.745 "flush": true, 00:20:21.745 "reset": true, 00:20:21.745 "nvme_admin": false, 00:20:21.745 "nvme_io": false, 00:20:21.745 "nvme_io_md": false, 00:20:21.745 "write_zeroes": true, 00:20:21.745 "zcopy": false, 00:20:21.745 "get_zone_info": false, 00:20:21.745 "zone_management": false, 00:20:21.745 "zone_append": false, 00:20:21.745 "compare": false, 00:20:21.745 "compare_and_write": false, 00:20:21.745 "abort": false, 00:20:21.745 "seek_hole": false, 00:20:21.745 "seek_data": false, 00:20:21.745 "copy": false, 00:20:21.745 "nvme_iov_md": false 00:20:21.745 }, 00:20:21.745 "memory_domains": [ 00:20:21.745 { 00:20:21.745 "dma_device_id": "system", 00:20:21.745 "dma_device_type": 1 00:20:21.745 }, 00:20:21.745 { 00:20:21.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:21.745 "dma_device_type": 2 00:20:21.745 }, 00:20:21.745 { 00:20:21.745 "dma_device_id": "system", 00:20:21.745 "dma_device_type": 1 00:20:21.745 }, 00:20:21.745 { 00:20:21.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:21.745 "dma_device_type": 2 00:20:21.745 }, 00:20:21.745 { 00:20:21.745 "dma_device_id": "system", 00:20:21.745 "dma_device_type": 1 00:20:21.745 }, 00:20:21.745 { 00:20:21.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:21.745 "dma_device_type": 2 00:20:21.745 } 00:20:21.745 ], 00:20:21.745 "driver_specific": { 00:20:21.745 "raid": { 00:20:21.745 "uuid": "e81885c1-faaf-4dd2-8f9a-454518ec1c5c", 00:20:21.745 "strip_size_kb": 64, 00:20:21.745 "state": "online", 00:20:21.745 
"raid_level": "raid0", 00:20:21.745 "superblock": true, 00:20:21.745 "num_base_bdevs": 3, 00:20:21.745 "num_base_bdevs_discovered": 3, 00:20:21.745 "num_base_bdevs_operational": 3, 00:20:21.745 "base_bdevs_list": [ 00:20:21.745 { 00:20:21.745 "name": "NewBaseBdev", 00:20:21.745 "uuid": "e9a2edbc-2f70-4678-ac50-4803aa772069", 00:20:21.745 "is_configured": true, 00:20:21.745 "data_offset": 2048, 00:20:21.745 "data_size": 63488 00:20:21.745 }, 00:20:21.745 { 00:20:21.745 "name": "BaseBdev2", 00:20:21.745 "uuid": "3f8f59c0-f4e1-4886-bec2-2125815eb9cd", 00:20:21.745 "is_configured": true, 00:20:21.745 "data_offset": 2048, 00:20:21.745 "data_size": 63488 00:20:21.745 }, 00:20:21.745 { 00:20:21.745 "name": "BaseBdev3", 00:20:21.745 "uuid": "61a716c6-1b18-41d7-b0d6-e200ba2f4d03", 00:20:21.745 "is_configured": true, 00:20:21.745 "data_offset": 2048, 00:20:21.745 "data_size": 63488 00:20:21.745 } 00:20:21.745 ] 00:20:21.745 } 00:20:21.745 } 00:20:21.745 }' 00:20:21.745 23:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:21.745 23:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:20:21.745 BaseBdev2 00:20:21.745 BaseBdev3' 00:20:21.745 23:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:21.745 23:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:21.745 23:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:21.745 23:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:20:21.745 23:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:20:21.745 23:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.745 23:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.745 23:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.006 23:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:22.006 23:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:22.006 23:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:22.006 23:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:22.006 23:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:22.006 23:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.006 23:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.006 23:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.006 23:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:22.007 23:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:22.007 23:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:22.007 23:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:22.007 23:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:22.007 23:01:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.007 23:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.007 23:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.007 23:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:22.007 23:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:22.007 23:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:22.007 23:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.007 23:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.007 [2024-12-09 23:01:57.175438] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:22.007 [2024-12-09 23:01:57.175635] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:22.007 [2024-12-09 23:01:57.175752] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:22.007 [2024-12-09 23:01:57.175821] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:22.007 [2024-12-09 23:01:57.175835] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:20:22.007 23:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.007 23:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62925 00:20:22.007 23:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62925 ']' 00:20:22.007 23:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62925 00:20:22.007 23:01:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:20:22.007 23:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:22.007 23:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62925 00:20:22.007 killing process with pid 62925 00:20:22.007 23:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:22.007 23:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:22.007 23:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62925' 00:20:22.007 23:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62925 00:20:22.007 23:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62925 00:20:22.007 [2024-12-09 23:01:57.210536] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:22.268 [2024-12-09 23:01:57.429192] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:23.215 ************************************ 00:20:23.215 END TEST raid_state_function_test_sb 00:20:23.215 ************************************ 00:20:23.215 23:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:20:23.215 00:20:23.215 real 0m8.118s 00:20:23.215 user 0m12.515s 00:20:23.215 sys 0m1.535s 00:20:23.215 23:01:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:23.215 23:01:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.215 23:01:58 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:20:23.215 23:01:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:23.215 23:01:58 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:20:23.215 23:01:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:23.215 ************************************ 00:20:23.215 START TEST raid_superblock_test 00:20:23.215 ************************************ 00:20:23.215 23:01:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:20:23.215 23:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:20:23.215 23:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:20:23.215 23:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:23.215 23:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:23.215 23:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:23.215 23:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:23.215 23:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:23.215 23:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:23.215 23:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:23.215 23:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:23.215 23:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:23.215 23:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:23.215 23:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:23.215 23:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:20:23.215 23:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:20:23.215 23:01:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:20:23.215 23:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63523 00:20:23.215 23:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63523 00:20:23.215 23:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:23.215 23:01:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63523 ']' 00:20:23.215 23:01:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.215 23:01:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:23.215 23:01:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:23.215 23:01:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:23.215 23:01:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.216 [2024-12-09 23:01:58.405741] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:20:23.216 [2024-12-09 23:01:58.406168] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63523 ] 00:20:23.216 [2024-12-09 23:01:58.564410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.476 [2024-12-09 23:01:58.703591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.736 [2024-12-09 23:01:58.869508] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:23.736 [2024-12-09 23:01:58.869559] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:23.996 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:23.996 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:20:23.996 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:23.996 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:23.997 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:23.997 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:23.997 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:23.997 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:23.997 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:23.997 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:23.997 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:20:23.997 
23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.997 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.997 malloc1 00:20:23.997 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.997 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:23.997 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.997 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.997 [2024-12-09 23:01:59.317046] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:23.997 [2024-12-09 23:01:59.317339] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:23.997 [2024-12-09 23:01:59.317396] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:23.997 [2024-12-09 23:01:59.317613] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:23.997 [2024-12-09 23:01:59.320268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:23.997 [2024-12-09 23:01:59.320322] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:23.997 pt1 00:20:23.997 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.997 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:23.997 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:23.997 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:23.997 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:20:23.997 23:01:59 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:23.997 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:23.997 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:23.997 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:23.997 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:20:23.997 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.997 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.997 malloc2 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.258 [2024-12-09 23:01:59.360997] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:24.258 [2024-12-09 23:01:59.361082] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.258 [2024-12-09 23:01:59.361129] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:24.258 [2024-12-09 23:01:59.361139] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.258 [2024-12-09 23:01:59.363700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.258 [2024-12-09 23:01:59.363915] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:24.258 
pt2 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.258 malloc3 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.258 [2024-12-09 23:01:59.411986] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:24.258 [2024-12-09 23:01:59.412075] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.258 [2024-12-09 23:01:59.412123] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:24.258 [2024-12-09 23:01:59.412134] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.258 [2024-12-09 23:01:59.414824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.258 [2024-12-09 23:01:59.414887] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:24.258 pt3 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.258 [2024-12-09 23:01:59.420053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:24.258 [2024-12-09 23:01:59.422548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:24.258 [2024-12-09 23:01:59.422800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:24.258 [2024-12-09 23:01:59.423035] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:24.258 [2024-12-09 23:01:59.423061] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:24.258 [2024-12-09 23:01:59.423425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:20:24.258 [2024-12-09 23:01:59.423599] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:24.258 [2024-12-09 23:01:59.423609] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:24.258 [2024-12-09 23:01:59.423798] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.258 23:01:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.258 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:24.258 "name": "raid_bdev1", 00:20:24.258 "uuid": "f495298a-05bd-41dd-a8ff-124da6b38c0e", 00:20:24.258 "strip_size_kb": 64, 00:20:24.258 "state": "online", 00:20:24.258 "raid_level": "raid0", 00:20:24.258 "superblock": true, 00:20:24.258 "num_base_bdevs": 3, 00:20:24.258 "num_base_bdevs_discovered": 3, 00:20:24.258 "num_base_bdevs_operational": 3, 00:20:24.258 "base_bdevs_list": [ 00:20:24.258 { 00:20:24.258 "name": "pt1", 00:20:24.258 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:24.258 "is_configured": true, 00:20:24.258 "data_offset": 2048, 00:20:24.258 "data_size": 63488 00:20:24.258 }, 00:20:24.258 { 00:20:24.258 "name": "pt2", 00:20:24.258 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:24.258 "is_configured": true, 00:20:24.258 "data_offset": 2048, 00:20:24.258 "data_size": 63488 00:20:24.258 }, 00:20:24.258 { 00:20:24.258 "name": "pt3", 00:20:24.258 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:24.258 "is_configured": true, 00:20:24.258 "data_offset": 2048, 00:20:24.258 "data_size": 63488 00:20:24.259 } 00:20:24.259 ] 00:20:24.259 }' 00:20:24.259 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:24.259 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.520 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:24.520 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:24.520 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:24.520 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:20:24.520 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:24.520 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:24.520 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:24.520 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.520 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:24.520 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.520 [2024-12-09 23:01:59.772476] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:24.521 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.521 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:24.521 "name": "raid_bdev1", 00:20:24.521 "aliases": [ 00:20:24.521 "f495298a-05bd-41dd-a8ff-124da6b38c0e" 00:20:24.521 ], 00:20:24.521 "product_name": "Raid Volume", 00:20:24.521 "block_size": 512, 00:20:24.521 "num_blocks": 190464, 00:20:24.521 "uuid": "f495298a-05bd-41dd-a8ff-124da6b38c0e", 00:20:24.521 "assigned_rate_limits": { 00:20:24.521 "rw_ios_per_sec": 0, 00:20:24.521 "rw_mbytes_per_sec": 0, 00:20:24.521 "r_mbytes_per_sec": 0, 00:20:24.521 "w_mbytes_per_sec": 0 00:20:24.521 }, 00:20:24.521 "claimed": false, 00:20:24.521 "zoned": false, 00:20:24.521 "supported_io_types": { 00:20:24.521 "read": true, 00:20:24.521 "write": true, 00:20:24.521 "unmap": true, 00:20:24.521 "flush": true, 00:20:24.521 "reset": true, 00:20:24.521 "nvme_admin": false, 00:20:24.521 "nvme_io": false, 00:20:24.521 "nvme_io_md": false, 00:20:24.521 "write_zeroes": true, 00:20:24.521 "zcopy": false, 00:20:24.521 "get_zone_info": false, 00:20:24.521 "zone_management": false, 00:20:24.521 "zone_append": false, 00:20:24.521 "compare": 
false, 00:20:24.521 "compare_and_write": false, 00:20:24.521 "abort": false, 00:20:24.521 "seek_hole": false, 00:20:24.521 "seek_data": false, 00:20:24.521 "copy": false, 00:20:24.521 "nvme_iov_md": false 00:20:24.521 }, 00:20:24.521 "memory_domains": [ 00:20:24.521 { 00:20:24.521 "dma_device_id": "system", 00:20:24.521 "dma_device_type": 1 00:20:24.521 }, 00:20:24.521 { 00:20:24.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:24.521 "dma_device_type": 2 00:20:24.521 }, 00:20:24.521 { 00:20:24.521 "dma_device_id": "system", 00:20:24.521 "dma_device_type": 1 00:20:24.521 }, 00:20:24.521 { 00:20:24.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:24.521 "dma_device_type": 2 00:20:24.521 }, 00:20:24.521 { 00:20:24.521 "dma_device_id": "system", 00:20:24.521 "dma_device_type": 1 00:20:24.521 }, 00:20:24.521 { 00:20:24.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:24.521 "dma_device_type": 2 00:20:24.521 } 00:20:24.521 ], 00:20:24.521 "driver_specific": { 00:20:24.521 "raid": { 00:20:24.521 "uuid": "f495298a-05bd-41dd-a8ff-124da6b38c0e", 00:20:24.521 "strip_size_kb": 64, 00:20:24.521 "state": "online", 00:20:24.521 "raid_level": "raid0", 00:20:24.521 "superblock": true, 00:20:24.521 "num_base_bdevs": 3, 00:20:24.521 "num_base_bdevs_discovered": 3, 00:20:24.521 "num_base_bdevs_operational": 3, 00:20:24.521 "base_bdevs_list": [ 00:20:24.521 { 00:20:24.521 "name": "pt1", 00:20:24.521 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:24.521 "is_configured": true, 00:20:24.521 "data_offset": 2048, 00:20:24.521 "data_size": 63488 00:20:24.521 }, 00:20:24.521 { 00:20:24.521 "name": "pt2", 00:20:24.521 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:24.521 "is_configured": true, 00:20:24.521 "data_offset": 2048, 00:20:24.521 "data_size": 63488 00:20:24.521 }, 00:20:24.521 { 00:20:24.521 "name": "pt3", 00:20:24.521 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:24.521 "is_configured": true, 00:20:24.521 "data_offset": 2048, 00:20:24.521 "data_size": 
63488 00:20:24.521 } 00:20:24.521 ] 00:20:24.521 } 00:20:24.521 } 00:20:24.521 }' 00:20:24.521 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:24.521 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:24.521 pt2 00:20:24.521 pt3' 00:20:24.521 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:24.521 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:24.521 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:24.521 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:24.521 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.521 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.521 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:24.521 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.782 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:24.782 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:24.782 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:24.782 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:24.782 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.782 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.782 
23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:24.782 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.782 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:24.782 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:24.782 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:24.782 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:20:24.782 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:24.782 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.782 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.782 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.782 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:24.782 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:24.782 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:24.782 23:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:24.782 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.782 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.782 [2024-12-09 23:01:59.976718] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:24.782 23:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:20:24.782 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f495298a-05bd-41dd-a8ff-124da6b38c0e 00:20:24.782 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f495298a-05bd-41dd-a8ff-124da6b38c0e ']' 00:20:24.782 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:24.782 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.782 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.782 [2024-12-09 23:02:00.008225] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:24.782 [2024-12-09 23:02:00.008483] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:24.782 [2024-12-09 23:02:00.008652] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:24.782 [2024-12-09 23:02:00.008777] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:24.782 [2024-12-09 23:02:00.008793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:24.782 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.782 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.782 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.782 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.782 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:24.782 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.782 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:20:24.782 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:24.782 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:24.782 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:24.782 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.782 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.782 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.782 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:24.782 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:24.782 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.782 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.782 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.782 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:24.782 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:20:24.782 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.782 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.782 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.782 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:24.782 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.782 23:02:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:24.783 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:24.783 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.783 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:24.783 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:24.783 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:20:24.783 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:24.783 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:24.783 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:24.783 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:24.783 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:24.783 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:24.783 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.783 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.783 [2024-12-09 23:02:00.112263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:24.783 [2024-12-09 23:02:00.115657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:24.783 [2024-12-09 23:02:00.115936] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:24.783 [2024-12-09 23:02:00.116020] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:24.783 [2024-12-09 23:02:00.116096] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:24.783 [2024-12-09 23:02:00.116151] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:20:24.783 [2024-12-09 23:02:00.116171] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:24.783 [2024-12-09 23:02:00.116185] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:24.783 request: 00:20:24.783 { 00:20:24.783 "name": "raid_bdev1", 00:20:24.783 "raid_level": "raid0", 00:20:24.783 "base_bdevs": [ 00:20:24.783 "malloc1", 00:20:24.783 "malloc2", 00:20:24.783 "malloc3" 00:20:24.783 ], 00:20:24.783 "strip_size_kb": 64, 00:20:24.783 "superblock": false, 00:20:24.783 "method": "bdev_raid_create", 00:20:24.783 "req_id": 1 00:20:24.783 } 00:20:24.783 Got JSON-RPC error response 00:20:24.783 response: 00:20:24.783 { 00:20:24.783 "code": -17, 00:20:24.783 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:24.783 } 00:20:24.783 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:24.783 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:20:24.783 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:24.783 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:24.783 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:24.783 23:02:00 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.783 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.783 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:24.783 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.783 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.043 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:25.043 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:25.043 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:25.043 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.043 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.043 [2024-12-09 23:02:00.156461] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:25.043 [2024-12-09 23:02:00.156711] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:25.043 [2024-12-09 23:02:00.156775] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:25.043 [2024-12-09 23:02:00.157521] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:25.043 [2024-12-09 23:02:00.160841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:25.043 [2024-12-09 23:02:00.161114] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:25.043 [2024-12-09 23:02:00.161375] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:25.043 [2024-12-09 23:02:00.161482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:20:25.043 pt1 00:20:25.043 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.043 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:20:25.043 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:25.043 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:25.043 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:25.043 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:25.043 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:25.043 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:25.043 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:25.043 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:25.043 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:25.043 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.043 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.043 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.043 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.043 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.043 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:25.043 "name": "raid_bdev1", 00:20:25.043 "uuid": "f495298a-05bd-41dd-a8ff-124da6b38c0e", 00:20:25.043 
"strip_size_kb": 64, 00:20:25.043 "state": "configuring", 00:20:25.043 "raid_level": "raid0", 00:20:25.043 "superblock": true, 00:20:25.043 "num_base_bdevs": 3, 00:20:25.043 "num_base_bdevs_discovered": 1, 00:20:25.043 "num_base_bdevs_operational": 3, 00:20:25.043 "base_bdevs_list": [ 00:20:25.043 { 00:20:25.043 "name": "pt1", 00:20:25.043 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:25.043 "is_configured": true, 00:20:25.043 "data_offset": 2048, 00:20:25.043 "data_size": 63488 00:20:25.043 }, 00:20:25.043 { 00:20:25.043 "name": null, 00:20:25.043 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:25.043 "is_configured": false, 00:20:25.043 "data_offset": 2048, 00:20:25.043 "data_size": 63488 00:20:25.043 }, 00:20:25.043 { 00:20:25.043 "name": null, 00:20:25.043 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:25.043 "is_configured": false, 00:20:25.043 "data_offset": 2048, 00:20:25.043 "data_size": 63488 00:20:25.043 } 00:20:25.043 ] 00:20:25.043 }' 00:20:25.043 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:25.043 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.305 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:20:25.305 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:25.305 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.305 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.305 [2024-12-09 23:02:00.489657] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:25.305 [2024-12-09 23:02:00.489784] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:25.305 [2024-12-09 23:02:00.489825] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:20:25.305 [2024-12-09 23:02:00.489840] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:25.305 [2024-12-09 23:02:00.490635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:25.305 [2024-12-09 23:02:00.490667] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:25.305 [2024-12-09 23:02:00.490840] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:25.305 [2024-12-09 23:02:00.490881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:25.305 pt2 00:20:25.305 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.305 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:20:25.305 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.305 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.305 [2024-12-09 23:02:00.497656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:25.305 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.305 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:20:25.305 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:25.305 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:25.305 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:25.305 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:25.305 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:25.305 23:02:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:25.305 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:25.305 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:25.305 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:25.305 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.305 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.305 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.305 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.305 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.305 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:25.305 "name": "raid_bdev1", 00:20:25.305 "uuid": "f495298a-05bd-41dd-a8ff-124da6b38c0e", 00:20:25.305 "strip_size_kb": 64, 00:20:25.305 "state": "configuring", 00:20:25.305 "raid_level": "raid0", 00:20:25.305 "superblock": true, 00:20:25.305 "num_base_bdevs": 3, 00:20:25.305 "num_base_bdevs_discovered": 1, 00:20:25.305 "num_base_bdevs_operational": 3, 00:20:25.305 "base_bdevs_list": [ 00:20:25.305 { 00:20:25.305 "name": "pt1", 00:20:25.305 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:25.305 "is_configured": true, 00:20:25.305 "data_offset": 2048, 00:20:25.305 "data_size": 63488 00:20:25.305 }, 00:20:25.305 { 00:20:25.305 "name": null, 00:20:25.305 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:25.305 "is_configured": false, 00:20:25.305 "data_offset": 0, 00:20:25.305 "data_size": 63488 00:20:25.305 }, 00:20:25.305 { 00:20:25.305 "name": null, 00:20:25.305 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:25.305 
"is_configured": false, 00:20:25.305 "data_offset": 2048, 00:20:25.305 "data_size": 63488 00:20:25.305 } 00:20:25.305 ] 00:20:25.305 }' 00:20:25.305 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:25.305 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.587 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:25.587 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:25.587 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:25.587 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.587 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.587 [2024-12-09 23:02:00.809725] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:25.587 [2024-12-09 23:02:00.809822] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:25.587 [2024-12-09 23:02:00.809844] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:25.587 [2024-12-09 23:02:00.809857] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:25.587 [2024-12-09 23:02:00.810428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:25.587 [2024-12-09 23:02:00.810456] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:25.587 [2024-12-09 23:02:00.810595] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:25.587 [2024-12-09 23:02:00.810640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:25.587 pt2 00:20:25.587 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:25.587 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:25.587 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:25.587 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:25.587 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.587 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.587 [2024-12-09 23:02:00.817671] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:25.587 [2024-12-09 23:02:00.817913] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:25.587 [2024-12-09 23:02:00.817958] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:25.587 [2024-12-09 23:02:00.818510] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:25.587 [2024-12-09 23:02:00.819218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:25.587 [2024-12-09 23:02:00.819275] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:25.587 [2024-12-09 23:02:00.819408] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:25.587 [2024-12-09 23:02:00.819443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:25.587 [2024-12-09 23:02:00.819646] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:25.587 [2024-12-09 23:02:00.819670] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:25.587 [2024-12-09 23:02:00.819969] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:25.587 [2024-12-09 23:02:00.820156] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:25.587 [2024-12-09 23:02:00.820172] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:25.587 [2024-12-09 23:02:00.820333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:25.587 pt3 00:20:25.587 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.587 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:25.587 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:25.587 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:25.587 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:25.588 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:25.588 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:25.588 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:25.588 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:25.588 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:25.588 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:25.588 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:25.588 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:25.588 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.588 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:20:25.588 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.588 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.588 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.588 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:25.588 "name": "raid_bdev1", 00:20:25.588 "uuid": "f495298a-05bd-41dd-a8ff-124da6b38c0e", 00:20:25.588 "strip_size_kb": 64, 00:20:25.588 "state": "online", 00:20:25.588 "raid_level": "raid0", 00:20:25.588 "superblock": true, 00:20:25.588 "num_base_bdevs": 3, 00:20:25.588 "num_base_bdevs_discovered": 3, 00:20:25.588 "num_base_bdevs_operational": 3, 00:20:25.588 "base_bdevs_list": [ 00:20:25.588 { 00:20:25.588 "name": "pt1", 00:20:25.588 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:25.588 "is_configured": true, 00:20:25.588 "data_offset": 2048, 00:20:25.588 "data_size": 63488 00:20:25.588 }, 00:20:25.588 { 00:20:25.588 "name": "pt2", 00:20:25.588 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:25.588 "is_configured": true, 00:20:25.588 "data_offset": 2048, 00:20:25.588 "data_size": 63488 00:20:25.588 }, 00:20:25.588 { 00:20:25.588 "name": "pt3", 00:20:25.588 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:25.588 "is_configured": true, 00:20:25.588 "data_offset": 2048, 00:20:25.588 "data_size": 63488 00:20:25.588 } 00:20:25.588 ] 00:20:25.588 }' 00:20:25.588 23:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:25.588 23:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.849 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:25.849 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:25.849 23:02:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:25.849 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:25.849 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:25.849 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:25.849 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:25.849 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.849 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:25.849 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.850 [2024-12-09 23:02:01.170277] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:25.850 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.850 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:25.850 "name": "raid_bdev1", 00:20:25.850 "aliases": [ 00:20:25.850 "f495298a-05bd-41dd-a8ff-124da6b38c0e" 00:20:25.850 ], 00:20:25.850 "product_name": "Raid Volume", 00:20:25.850 "block_size": 512, 00:20:25.850 "num_blocks": 190464, 00:20:25.850 "uuid": "f495298a-05bd-41dd-a8ff-124da6b38c0e", 00:20:25.850 "assigned_rate_limits": { 00:20:25.850 "rw_ios_per_sec": 0, 00:20:25.850 "rw_mbytes_per_sec": 0, 00:20:25.850 "r_mbytes_per_sec": 0, 00:20:25.850 "w_mbytes_per_sec": 0 00:20:25.850 }, 00:20:25.850 "claimed": false, 00:20:25.850 "zoned": false, 00:20:25.850 "supported_io_types": { 00:20:25.850 "read": true, 00:20:25.850 "write": true, 00:20:25.850 "unmap": true, 00:20:25.850 "flush": true, 00:20:25.850 "reset": true, 00:20:25.850 "nvme_admin": false, 00:20:25.850 "nvme_io": false, 00:20:25.850 "nvme_io_md": false, 00:20:25.850 
"write_zeroes": true, 00:20:25.850 "zcopy": false, 00:20:25.850 "get_zone_info": false, 00:20:25.850 "zone_management": false, 00:20:25.850 "zone_append": false, 00:20:25.850 "compare": false, 00:20:25.850 "compare_and_write": false, 00:20:25.850 "abort": false, 00:20:25.850 "seek_hole": false, 00:20:25.850 "seek_data": false, 00:20:25.850 "copy": false, 00:20:25.850 "nvme_iov_md": false 00:20:25.850 }, 00:20:25.850 "memory_domains": [ 00:20:25.850 { 00:20:25.850 "dma_device_id": "system", 00:20:25.850 "dma_device_type": 1 00:20:25.850 }, 00:20:25.850 { 00:20:25.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:25.850 "dma_device_type": 2 00:20:25.850 }, 00:20:25.850 { 00:20:25.850 "dma_device_id": "system", 00:20:25.850 "dma_device_type": 1 00:20:25.850 }, 00:20:25.850 { 00:20:25.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:25.850 "dma_device_type": 2 00:20:25.850 }, 00:20:25.850 { 00:20:25.850 "dma_device_id": "system", 00:20:25.850 "dma_device_type": 1 00:20:25.850 }, 00:20:25.850 { 00:20:25.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:25.850 "dma_device_type": 2 00:20:25.850 } 00:20:25.850 ], 00:20:25.850 "driver_specific": { 00:20:25.850 "raid": { 00:20:25.850 "uuid": "f495298a-05bd-41dd-a8ff-124da6b38c0e", 00:20:25.850 "strip_size_kb": 64, 00:20:25.850 "state": "online", 00:20:25.850 "raid_level": "raid0", 00:20:25.850 "superblock": true, 00:20:25.850 "num_base_bdevs": 3, 00:20:25.850 "num_base_bdevs_discovered": 3, 00:20:25.850 "num_base_bdevs_operational": 3, 00:20:25.850 "base_bdevs_list": [ 00:20:25.850 { 00:20:25.850 "name": "pt1", 00:20:25.850 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:25.850 "is_configured": true, 00:20:25.850 "data_offset": 2048, 00:20:25.850 "data_size": 63488 00:20:25.850 }, 00:20:25.850 { 00:20:25.850 "name": "pt2", 00:20:25.850 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:25.850 "is_configured": true, 00:20:25.850 "data_offset": 2048, 00:20:25.850 "data_size": 63488 00:20:25.850 }, 00:20:25.850 
{ 00:20:25.850 "name": "pt3", 00:20:25.850 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:25.850 "is_configured": true, 00:20:25.850 "data_offset": 2048, 00:20:25.850 "data_size": 63488 00:20:25.850 } 00:20:25.850 ] 00:20:25.850 } 00:20:25.850 } 00:20:25.850 }' 00:20:25.850 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:26.110 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:26.110 pt2 00:20:26.110 pt3' 00:20:26.110 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:26.110 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:26.110 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:26.110 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:26.110 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:26.110 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.110 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.110 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.110 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:26.110 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:26.110 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:26.110 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:20:26.110 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:26.110 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.110 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.110 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.110 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:26.110 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:26.111 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:26.111 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:20:26.111 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:26.111 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.111 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.111 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.111 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:26.111 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:26.111 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:26.111 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:26.111 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.111 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.111 [2024-12-09 
23:02:01.382213] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:26.111 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.111 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f495298a-05bd-41dd-a8ff-124da6b38c0e '!=' f495298a-05bd-41dd-a8ff-124da6b38c0e ']' 00:20:26.111 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:20:26.111 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:26.111 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:20:26.111 23:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63523 00:20:26.111 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63523 ']' 00:20:26.111 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63523 00:20:26.111 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:20:26.111 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:26.111 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63523 00:20:26.111 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:26.111 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:26.111 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63523' 00:20:26.111 killing process with pid 63523 00:20:26.111 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63523 00:20:26.111 23:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63523 00:20:26.111 [2024-12-09 23:02:01.440533] bdev_raid.c:1387:raid_bdev_fini_start: 
*DEBUG*: raid_bdev_fini_start 00:20:26.111 [2024-12-09 23:02:01.440679] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:26.111 [2024-12-09 23:02:01.440763] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:26.111 [2024-12-09 23:02:01.440778] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:26.372 [2024-12-09 23:02:01.665830] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:27.315 23:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:20:27.315 ************************************ 00:20:27.315 END TEST raid_superblock_test 00:20:27.315 ************************************ 00:20:27.315 00:20:27.315 real 0m4.172s 00:20:27.315 user 0m5.757s 00:20:27.315 sys 0m0.786s 00:20:27.316 23:02:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:27.316 23:02:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.316 23:02:02 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:20:27.316 23:02:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:27.316 23:02:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:27.316 23:02:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:27.316 ************************************ 00:20:27.316 START TEST raid_read_error_test 00:20:27.316 ************************************ 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 
'!=' raid1 ']' 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.dmkH0i5vbm 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63765 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63765 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:27.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63765 ']' 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:27.316 23:02:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.576 [2024-12-09 23:02:02.689860] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:20:27.576 [2024-12-09 23:02:02.690450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63765 ] 00:20:27.576 [2024-12-09 23:02:02.911256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.837 [2024-12-09 23:02:03.081397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.098 [2024-12-09 23:02:03.274575] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:28.098 [2024-12-09 23:02:03.274638] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:28.359 23:02:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:28.359 23:02:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:20:28.359 23:02:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:28.359 23:02:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:28.359 23:02:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.359 23:02:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.359 BaseBdev1_malloc 00:20:28.359 23:02:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.359 23:02:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:20:28.359 23:02:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.359 23:02:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.359 true 00:20:28.359 23:02:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:28.359 23:02:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:28.359 23:02:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.359 23:02:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.359 [2024-12-09 23:02:03.605572] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:28.359 [2024-12-09 23:02:03.605863] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:28.359 [2024-12-09 23:02:03.605928] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:28.359 [2024-12-09 23:02:03.605945] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:28.359 [2024-12-09 23:02:03.608916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:28.359 BaseBdev1 00:20:28.359 [2024-12-09 23:02:03.609171] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:28.359 23:02:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.359 23:02:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:28.359 23:02:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:28.359 23:02:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.359 23:02:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.359 BaseBdev2_malloc 00:20:28.359 23:02:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.359 23:02:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:20:28.359 23:02:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.359 23:02:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.359 true 00:20:28.359 23:02:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.359 23:02:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:28.359 23:02:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.359 23:02:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.359 [2024-12-09 23:02:03.670193] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:28.359 [2024-12-09 23:02:03.670290] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:28.359 [2024-12-09 23:02:03.670323] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:20:28.359 [2024-12-09 23:02:03.670337] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:28.359 [2024-12-09 23:02:03.673588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:28.359 [2024-12-09 23:02:03.673663] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:28.359 BaseBdev2 00:20:28.359 23:02:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.359 23:02:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:28.359 23:02:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:28.359 23:02:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.359 23:02:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.619 BaseBdev3_malloc 00:20:28.619 23:02:03 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.619 23:02:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:20:28.619 23:02:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.619 23:02:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.619 true 00:20:28.619 23:02:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.619 23:02:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:28.619 23:02:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.619 23:02:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.619 [2024-12-09 23:02:03.741339] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:28.619 [2024-12-09 23:02:03.741418] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:28.619 [2024-12-09 23:02:03.741442] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:28.619 [2024-12-09 23:02:03.741454] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:28.619 [2024-12-09 23:02:03.744140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:28.619 [2024-12-09 23:02:03.744203] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:28.619 BaseBdev3 00:20:28.619 23:02:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.619 23:02:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:20:28.619 23:02:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.619 23:02:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.619 [2024-12-09 23:02:03.749440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:28.619 [2024-12-09 23:02:03.751909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:28.619 [2024-12-09 23:02:03.752012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:28.619 [2024-12-09 23:02:03.752285] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:28.619 [2024-12-09 23:02:03.752302] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:28.619 [2024-12-09 23:02:03.752660] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:20:28.619 [2024-12-09 23:02:03.752923] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:28.619 [2024-12-09 23:02:03.752945] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:28.619 [2024-12-09 23:02:03.753173] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:28.619 23:02:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.619 23:02:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:28.619 23:02:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:28.619 23:02:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:28.619 23:02:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:28.619 23:02:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:28.619 23:02:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:28.619 23:02:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:28.619 23:02:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:28.619 23:02:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:28.619 23:02:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:28.619 23:02:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.619 23:02:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.619 23:02:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.619 23:02:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.619 23:02:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.619 23:02:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:28.619 "name": "raid_bdev1", 00:20:28.619 "uuid": "deb09ed0-3077-4edc-83f2-ff2317175ae1", 00:20:28.619 "strip_size_kb": 64, 00:20:28.619 "state": "online", 00:20:28.619 "raid_level": "raid0", 00:20:28.619 "superblock": true, 00:20:28.619 "num_base_bdevs": 3, 00:20:28.619 "num_base_bdevs_discovered": 3, 00:20:28.619 "num_base_bdevs_operational": 3, 00:20:28.619 "base_bdevs_list": [ 00:20:28.619 { 00:20:28.619 "name": "BaseBdev1", 00:20:28.620 "uuid": "df245fa3-f658-5909-8437-6ae597366084", 00:20:28.620 "is_configured": true, 00:20:28.620 "data_offset": 2048, 00:20:28.620 "data_size": 63488 00:20:28.620 }, 00:20:28.620 { 00:20:28.620 "name": "BaseBdev2", 00:20:28.620 "uuid": "126294da-21a0-5985-ae0b-e90c1fa61dac", 00:20:28.620 "is_configured": true, 00:20:28.620 "data_offset": 2048, 00:20:28.620 "data_size": 63488 
00:20:28.620 }, 00:20:28.620 { 00:20:28.620 "name": "BaseBdev3", 00:20:28.620 "uuid": "2dbab9f7-f33f-564b-bac2-611452caaf35", 00:20:28.620 "is_configured": true, 00:20:28.620 "data_offset": 2048, 00:20:28.620 "data_size": 63488 00:20:28.620 } 00:20:28.620 ] 00:20:28.620 }' 00:20:28.620 23:02:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:28.620 23:02:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.880 23:02:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:20:28.880 23:02:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:20:28.880 [2024-12-09 23:02:04.210716] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:20:29.942 23:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:20:29.942 23:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.942 23:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.942 23:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.942 23:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:20:29.943 23:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:20:29.943 23:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:20:29.943 23:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:29.943 23:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:29.943 23:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:20:29.943 23:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:29.943 23:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:29.943 23:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:29.943 23:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:29.943 23:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:29.943 23:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:29.943 23:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:29.943 23:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.943 23:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.943 23:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.943 23:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.943 23:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.943 23:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:29.943 "name": "raid_bdev1", 00:20:29.943 "uuid": "deb09ed0-3077-4edc-83f2-ff2317175ae1", 00:20:29.943 "strip_size_kb": 64, 00:20:29.943 "state": "online", 00:20:29.943 "raid_level": "raid0", 00:20:29.943 "superblock": true, 00:20:29.943 "num_base_bdevs": 3, 00:20:29.943 "num_base_bdevs_discovered": 3, 00:20:29.943 "num_base_bdevs_operational": 3, 00:20:29.943 "base_bdevs_list": [ 00:20:29.943 { 00:20:29.943 "name": "BaseBdev1", 00:20:29.943 "uuid": "df245fa3-f658-5909-8437-6ae597366084", 00:20:29.943 "is_configured": true, 00:20:29.943 "data_offset": 2048, 00:20:29.943 "data_size": 63488 
00:20:29.943 }, 00:20:29.943 { 00:20:29.943 "name": "BaseBdev2", 00:20:29.943 "uuid": "126294da-21a0-5985-ae0b-e90c1fa61dac", 00:20:29.943 "is_configured": true, 00:20:29.943 "data_offset": 2048, 00:20:29.943 "data_size": 63488 00:20:29.943 }, 00:20:29.943 { 00:20:29.943 "name": "BaseBdev3", 00:20:29.943 "uuid": "2dbab9f7-f33f-564b-bac2-611452caaf35", 00:20:29.943 "is_configured": true, 00:20:29.943 "data_offset": 2048, 00:20:29.943 "data_size": 63488 00:20:29.943 } 00:20:29.943 ] 00:20:29.943 }' 00:20:29.943 23:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:29.943 23:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.203 23:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:30.203 23:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.203 23:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.203 [2024-12-09 23:02:05.450024] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:30.203 [2024-12-09 23:02:05.450256] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:30.203 [2024-12-09 23:02:05.453595] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:30.203 [2024-12-09 23:02:05.453804] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:30.203 [2024-12-09 23:02:05.453886] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:30.203 [2024-12-09 23:02:05.454010] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:30.203 { 00:20:30.203 "results": [ 00:20:30.203 { 00:20:30.203 "job": "raid_bdev1", 00:20:30.203 "core_mask": "0x1", 00:20:30.203 "workload": "randrw", 00:20:30.203 "percentage": 50, 
00:20:30.203 "status": "finished", 00:20:30.203 "queue_depth": 1, 00:20:30.203 "io_size": 131072, 00:20:30.203 "runtime": 1.237182, 00:20:30.203 "iops": 12118.669686432553, 00:20:30.203 "mibps": 1514.8337108040691, 00:20:30.203 "io_failed": 1, 00:20:30.203 "io_timeout": 0, 00:20:30.203 "avg_latency_us": 114.35955715619579, 00:20:30.203 "min_latency_us": 34.26461538461538, 00:20:30.203 "max_latency_us": 1726.6215384615384 00:20:30.203 } 00:20:30.203 ], 00:20:30.203 "core_count": 1 00:20:30.203 } 00:20:30.203 23:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.203 23:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63765 00:20:30.203 23:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63765 ']' 00:20:30.203 23:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63765 00:20:30.203 23:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:20:30.203 23:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:30.203 23:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63765 00:20:30.203 killing process with pid 63765 00:20:30.203 23:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:30.203 23:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:30.203 23:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63765' 00:20:30.203 23:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63765 00:20:30.203 23:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63765 00:20:30.203 [2024-12-09 23:02:05.484735] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:30.472 [2024-12-09 
23:02:05.648434] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:31.414 23:02:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.dmkH0i5vbm 00:20:31.414 23:02:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:20:31.414 23:02:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:20:31.414 23:02:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.81 00:20:31.414 23:02:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:20:31.414 23:02:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:31.414 23:02:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:20:31.414 23:02:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.81 != \0\.\0\0 ]] 00:20:31.414 ************************************ 00:20:31.414 END TEST raid_read_error_test 00:20:31.414 ************************************ 00:20:31.414 00:20:31.414 real 0m3.939s 00:20:31.414 user 0m4.540s 00:20:31.414 sys 0m0.555s 00:20:31.414 23:02:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:31.414 23:02:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.414 23:02:06 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:20:31.414 23:02:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:31.414 23:02:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:31.414 23:02:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:31.414 ************************************ 00:20:31.414 START TEST raid_write_error_test 00:20:31.414 ************************************ 00:20:31.414 23:02:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:20:31.414 23:02:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:20:31.414 23:02:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:20:31.414 23:02:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:20:31.414 23:02:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:20:31.414 23:02:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:31.414 23:02:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:20:31.414 23:02:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:31.414 23:02:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:31.414 23:02:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:20:31.414 23:02:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:31.414 23:02:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:31.414 23:02:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:20:31.414 23:02:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:31.414 23:02:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:31.414 23:02:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:31.414 23:02:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:20:31.414 23:02:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:20:31.414 23:02:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:20:31.414 23:02:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:20:31.414 23:02:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:20:31.414 23:02:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:20:31.414 23:02:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:20:31.414 23:02:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:20:31.414 23:02:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:20:31.414 23:02:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:20:31.414 23:02:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.VuskwaxAbq 00:20:31.414 23:02:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63905 00:20:31.414 23:02:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63905 00:20:31.414 23:02:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63905 ']' 00:20:31.414 23:02:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:31.414 23:02:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:31.414 23:02:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:31.414 23:02:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:31.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
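The bdevperf run launched above uses a fixed I/O size (`-o 128k`, i.e. 131072 bytes), and the `iops` and `mibps` fields in its JSON results are related by that size. A minimal sketch of the conversion — the helper name is my own, not part of SPDK — using figures from the read-error test results earlier in this log:

```python
def iops_to_mibps(iops: float, io_size_bytes: int) -> float:
    """Convert an IOPS figure to MiB/s for a fixed I/O size, as bdevperf reports it."""
    return iops * io_size_bytes / (1024 * 1024)

# Values taken verbatim from the raid_read_error_test results block in this log:
mibps = iops_to_mibps(12118.669686432553, 131072)
# matches the log's "mibps": 1514.8337108040691
```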
00:20:31.414 23:02:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:31.414 23:02:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.414 [2024-12-09 23:02:06.665753] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:20:31.414 [2024-12-09 23:02:06.665922] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63905 ] 00:20:31.722 [2024-12-09 23:02:06.828918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.722 [2024-12-09 23:02:06.969013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.981 [2024-12-09 23:02:07.136326] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:31.981 [2024-12-09 23:02:07.136396] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:32.241 23:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:32.241 23:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:20:32.241 23:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:32.241 23:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:32.241 23:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.241 23:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.241 BaseBdev1_malloc 00:20:32.241 23:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.241 23:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:20:32.241 23:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.241 23:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.241 true 00:20:32.241 23:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.241 23:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:32.241 23:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.241 23:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.241 [2024-12-09 23:02:07.596207] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:32.241 [2024-12-09 23:02:07.596289] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:32.241 [2024-12-09 23:02:07.596313] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:32.241 [2024-12-09 23:02:07.596325] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:32.241 [2024-12-09 23:02:07.598893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:32.241 [2024-12-09 23:02:07.599170] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:32.241 BaseBdev1 00:20:32.241 23:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.241 23:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:32.241 23:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:32.241 23:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.241 23:02:07 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:32.501 BaseBdev2_malloc 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.501 true 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.501 [2024-12-09 23:02:07.650742] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:32.501 [2024-12-09 23:02:07.650980] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:32.501 [2024-12-09 23:02:07.651030] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:20:32.501 [2024-12-09 23:02:07.651113] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:32.501 [2024-12-09 23:02:07.653678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:32.501 [2024-12-09 23:02:07.653754] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:32.501 BaseBdev2 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:32.501 23:02:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.501 BaseBdev3_malloc 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.501 true 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.501 [2024-12-09 23:02:07.724706] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:32.501 [2024-12-09 23:02:07.725006] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:32.501 [2024-12-09 23:02:07.725133] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:32.501 [2024-12-09 23:02:07.725292] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:32.501 [2024-12-09 23:02:07.728375] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:32.501 [2024-12-09 23:02:07.728591] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:20:32.501 BaseBdev3 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.501 [2024-12-09 23:02:07.732898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:32.501 [2024-12-09 23:02:07.735559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:32.501 [2024-12-09 23:02:07.735876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:32.501 [2024-12-09 23:02:07.736200] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:32.501 [2024-12-09 23:02:07.736223] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:32.501 [2024-12-09 23:02:07.736555] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:20:32.501 [2024-12-09 23:02:07.736731] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:32.501 [2024-12-09 23:02:07.736745] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:32.501 [2024-12-09 23:02:07.737003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.501 23:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:32.501 "name": "raid_bdev1", 00:20:32.501 "uuid": "6cafb2fb-1cf6-4f54-ad5e-3316dbb8a739", 00:20:32.501 "strip_size_kb": 64, 00:20:32.501 "state": "online", 00:20:32.501 "raid_level": "raid0", 00:20:32.501 "superblock": true, 00:20:32.501 "num_base_bdevs": 3, 00:20:32.501 "num_base_bdevs_discovered": 3, 00:20:32.501 "num_base_bdevs_operational": 3, 00:20:32.502 "base_bdevs_list": [ 00:20:32.502 { 00:20:32.502 "name": "BaseBdev1", 
00:20:32.502 "uuid": "689e8b78-7424-5caf-a57b-3eb9b57ac042", 00:20:32.502 "is_configured": true, 00:20:32.502 "data_offset": 2048, 00:20:32.502 "data_size": 63488 00:20:32.502 }, 00:20:32.502 { 00:20:32.502 "name": "BaseBdev2", 00:20:32.502 "uuid": "6e07931f-ba83-5420-b962-95c61d4176a1", 00:20:32.502 "is_configured": true, 00:20:32.502 "data_offset": 2048, 00:20:32.502 "data_size": 63488 00:20:32.502 }, 00:20:32.502 { 00:20:32.502 "name": "BaseBdev3", 00:20:32.502 "uuid": "3cc2d8b7-2d98-5202-9280-d976b5351270", 00:20:32.502 "is_configured": true, 00:20:32.502 "data_offset": 2048, 00:20:32.502 "data_size": 63488 00:20:32.502 } 00:20:32.502 ] 00:20:32.502 }' 00:20:32.502 23:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:32.502 23:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.761 23:02:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:20:32.761 23:02:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:20:33.020 [2024-12-09 23:02:08.178604] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:20:33.961 23:02:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:20:33.961 23:02:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.961 23:02:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.961 23:02:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.961 23:02:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:20:33.961 23:02:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:20:33.961 23:02:09 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:20:33.961 23:02:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:33.961 23:02:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:33.961 23:02:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:33.961 23:02:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:33.961 23:02:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:33.961 23:02:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:33.961 23:02:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:33.961 23:02:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:33.961 23:02:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:33.961 23:02:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:33.961 23:02:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.961 23:02:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.961 23:02:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.961 23:02:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.961 23:02:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.961 23:02:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:33.961 "name": "raid_bdev1", 00:20:33.961 "uuid": "6cafb2fb-1cf6-4f54-ad5e-3316dbb8a739", 00:20:33.961 "strip_size_kb": 64, 00:20:33.961 "state": "online", 00:20:33.961 
"raid_level": "raid0", 00:20:33.961 "superblock": true, 00:20:33.961 "num_base_bdevs": 3, 00:20:33.961 "num_base_bdevs_discovered": 3, 00:20:33.961 "num_base_bdevs_operational": 3, 00:20:33.961 "base_bdevs_list": [ 00:20:33.961 { 00:20:33.961 "name": "BaseBdev1", 00:20:33.961 "uuid": "689e8b78-7424-5caf-a57b-3eb9b57ac042", 00:20:33.961 "is_configured": true, 00:20:33.961 "data_offset": 2048, 00:20:33.961 "data_size": 63488 00:20:33.961 }, 00:20:33.961 { 00:20:33.961 "name": "BaseBdev2", 00:20:33.961 "uuid": "6e07931f-ba83-5420-b962-95c61d4176a1", 00:20:33.961 "is_configured": true, 00:20:33.961 "data_offset": 2048, 00:20:33.961 "data_size": 63488 00:20:33.961 }, 00:20:33.961 { 00:20:33.961 "name": "BaseBdev3", 00:20:33.961 "uuid": "3cc2d8b7-2d98-5202-9280-d976b5351270", 00:20:33.961 "is_configured": true, 00:20:33.961 "data_offset": 2048, 00:20:33.961 "data_size": 63488 00:20:33.961 } 00:20:33.961 ] 00:20:33.961 }' 00:20:33.961 23:02:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:33.961 23:02:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.222 23:02:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:34.222 23:02:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.222 23:02:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.222 [2024-12-09 23:02:09.427536] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:34.222 [2024-12-09 23:02:09.427783] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:34.222 { 00:20:34.222 "results": [ 00:20:34.222 { 00:20:34.222 "job": "raid_bdev1", 00:20:34.222 "core_mask": "0x1", 00:20:34.222 "workload": "randrw", 00:20:34.222 "percentage": 50, 00:20:34.222 "status": "finished", 00:20:34.222 "queue_depth": 1, 00:20:34.222 "io_size": 131072, 
00:20:34.222 "runtime": 1.246412, 00:20:34.222 "iops": 10470.053240822457, 00:20:34.222 "mibps": 1308.7566551028071, 00:20:34.222 "io_failed": 1, 00:20:34.222 "io_timeout": 0, 00:20:34.222 "avg_latency_us": 132.6398236504129, 00:20:34.222 "min_latency_us": 34.26461538461538, 00:20:34.222 "max_latency_us": 2054.3015384615383 00:20:34.222 } 00:20:34.222 ], 00:20:34.222 "core_count": 1 00:20:34.222 } 00:20:34.222 [2024-12-09 23:02:09.431878] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:34.222 [2024-12-09 23:02:09.432036] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:34.222 [2024-12-09 23:02:09.432129] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:34.222 [2024-12-09 23:02:09.432146] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:34.222 23:02:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.222 23:02:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63905 00:20:34.222 23:02:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63905 ']' 00:20:34.222 23:02:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63905 00:20:34.222 23:02:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:20:34.222 23:02:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:34.222 23:02:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63905 00:20:34.222 23:02:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:34.222 23:02:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:34.222 killing process with pid 63905 00:20:34.222 23:02:09 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63905' 00:20:34.222 23:02:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63905 00:20:34.222 23:02:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63905 00:20:34.222 [2024-12-09 23:02:09.469000] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:34.482 [2024-12-09 23:02:09.657881] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:35.420 23:02:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.VuskwaxAbq 00:20:35.420 23:02:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:20:35.420 23:02:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:20:35.420 ************************************ 00:20:35.420 END TEST raid_write_error_test 00:20:35.420 ************************************ 00:20:35.420 23:02:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.80 00:20:35.420 23:02:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:20:35.420 23:02:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:35.420 23:02:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:20:35.420 23:02:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.80 != \0\.\0\0 ]] 00:20:35.420 00:20:35.420 real 0m4.046s 00:20:35.420 user 0m4.629s 00:20:35.420 sys 0m0.562s 00:20:35.420 23:02:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:35.420 23:02:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.420 23:02:10 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:20:35.420 23:02:10 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:20:35.420 23:02:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:35.420 23:02:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:35.420 23:02:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:35.420 ************************************ 00:20:35.420 START TEST raid_state_function_test 00:20:35.420 ************************************ 00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:20:35.420 23:02:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:20:35.420 Process raid pid: 64043 00:20:35.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
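The error tests above extract `fail_per_s` from the bdevperf log with `grep`/`awk`, but the same number follows from the JSON results block: failed I/Os divided by runtime. A hedged sketch of that arithmetic — the function name is hypothetical, chosen to match the shell variable:

```python
def fail_per_s(io_failed: int, runtime_s: float) -> float:
    """Failed I/Os per second, derived from a bdevperf results entry."""
    return io_failed / runtime_s

# Using the write-error test's results from this log
# ("io_failed": 1, "runtime": 1.246412), which rounds to the
# fail_per_s=0.80 the test compared against \0\.\0\0:
rate = fail_per_s(1, 1.246412)  # ~0.802, i.e. 0.80 at two decimals
```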
00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64043 00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64043' 00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64043 00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 64043 ']' 00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:35.420 23:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.420 [2024-12-09 23:02:10.765871] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:20:35.420 [2024-12-09 23:02:10.766017] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.678 [2024-12-09 23:02:10.934465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.937 [2024-12-09 23:02:11.090660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.937 [2024-12-09 23:02:11.269472] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:35.937 [2024-12-09 23:02:11.269774] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:36.510 23:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:36.510 23:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:20:36.510 23:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:36.510 23:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.510 23:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.510 [2024-12-09 23:02:11.659831] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:36.510 [2024-12-09 23:02:11.660075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:36.510 [2024-12-09 23:02:11.660239] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:36.510 [2024-12-09 23:02:11.660278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:36.510 [2024-12-09 23:02:11.660300] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:20:36.510 [2024-12-09 23:02:11.660325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:36.510 23:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.510 23:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:36.510 23:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:36.510 23:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:36.510 23:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:36.510 23:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:36.510 23:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:36.510 23:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:36.510 23:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:36.510 23:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:36.510 23:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:36.510 23:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.510 23:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:36.510 23:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.510 23:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.510 23:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.510 23:02:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:36.510 "name": "Existed_Raid", 00:20:36.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.510 "strip_size_kb": 64, 00:20:36.510 "state": "configuring", 00:20:36.510 "raid_level": "concat", 00:20:36.510 "superblock": false, 00:20:36.510 "num_base_bdevs": 3, 00:20:36.510 "num_base_bdevs_discovered": 0, 00:20:36.510 "num_base_bdevs_operational": 3, 00:20:36.510 "base_bdevs_list": [ 00:20:36.510 { 00:20:36.510 "name": "BaseBdev1", 00:20:36.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.510 "is_configured": false, 00:20:36.510 "data_offset": 0, 00:20:36.510 "data_size": 0 00:20:36.510 }, 00:20:36.510 { 00:20:36.510 "name": "BaseBdev2", 00:20:36.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.510 "is_configured": false, 00:20:36.510 "data_offset": 0, 00:20:36.510 "data_size": 0 00:20:36.510 }, 00:20:36.510 { 00:20:36.510 "name": "BaseBdev3", 00:20:36.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.510 "is_configured": false, 00:20:36.510 "data_offset": 0, 00:20:36.510 "data_size": 0 00:20:36.510 } 00:20:36.510 ] 00:20:36.510 }' 00:20:36.510 23:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:36.510 23:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.816 23:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:36.816 23:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.816 23:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.816 [2024-12-09 23:02:11.999828] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:36.816 [2024-12-09 23:02:11.999878] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
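The cycle above creates a concat raid with three not-yet-existing base bdevs, dumps its info via `bdev_raid_get_bdevs all` piped through `jq`, checks it with the `verify_raid_bdev_state` helper from `bdev_raid.sh`, then deletes it. As a minimal sketch (not the harness itself), the checks that helper applies to the dumped JSON can be mirrored in Python, using the exact field values from the `raid_bdev_info` dump in the log:

```python
import json

# JSON shape and values copied from the raid_bdev_info dump above
# (state "configuring", zero base bdevs discovered yet).
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "uuid": "00000000-0000-0000-0000-000000000000",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "concat",
  "superblock": false,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 0,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "uuid": "00000000-0000-0000-0000-000000000000",
     "is_configured": false, "data_offset": 0, "data_size": 0},
    {"name": "BaseBdev2", "uuid": "00000000-0000-0000-0000-000000000000",
     "is_configured": false, "data_offset": 0, "data_size": 0},
    {"name": "BaseBdev3", "uuid": "00000000-0000-0000-0000-000000000000",
     "is_configured": false, "data_offset": 0, "data_size": 0}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size,
                           num_operational):
    """Sketch of the assertions verify_raid_bdev_state makes on the RPC dump."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    # The discovered count must agree with the configured entries in the list.
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert discovered == info["num_base_bdevs_discovered"]

# Expected state for this step of the test: configuring, concat, 64K strip, 3 bdevs.
verify_raid_bdev_state(raid_bdev_info, "configuring", "concat", 64, 3)
```

After the delete, the log shows the same create repeated (test step 241), this time followed by `bdev_malloc_create 32 512 -b BaseBdev1` so that the first base bdev actually exists and gets claimed.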
00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.816 [2024-12-09 23:02:12.007844] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:36.816 [2024-12-09 23:02:12.008066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:36.816 [2024-12-09 23:02:12.008633] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:36.816 [2024-12-09 23:02:12.008698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:36.816 [2024-12-09 23:02:12.008707] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:36.816 [2024-12-09 23:02:12.008719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.816 [2024-12-09 23:02:12.060123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:36.816 BaseBdev1 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.816 [ 00:20:36.816 { 00:20:36.816 "name": "BaseBdev1", 00:20:36.816 "aliases": [ 00:20:36.816 "a0656686-c3ef-4e6c-a0e4-a958000edfba" 00:20:36.816 ], 00:20:36.816 "product_name": "Malloc disk", 00:20:36.816 "block_size": 512, 00:20:36.816 "num_blocks": 65536, 00:20:36.816 "uuid": "a0656686-c3ef-4e6c-a0e4-a958000edfba", 00:20:36.816 "assigned_rate_limits": { 00:20:36.816 "rw_ios_per_sec": 0, 00:20:36.816 "rw_mbytes_per_sec": 0, 00:20:36.816 "r_mbytes_per_sec": 0, 00:20:36.816 "w_mbytes_per_sec": 0 00:20:36.816 }, 
00:20:36.816 "claimed": true, 00:20:36.816 "claim_type": "exclusive_write", 00:20:36.816 "zoned": false, 00:20:36.816 "supported_io_types": { 00:20:36.816 "read": true, 00:20:36.816 "write": true, 00:20:36.816 "unmap": true, 00:20:36.816 "flush": true, 00:20:36.816 "reset": true, 00:20:36.816 "nvme_admin": false, 00:20:36.816 "nvme_io": false, 00:20:36.816 "nvme_io_md": false, 00:20:36.816 "write_zeroes": true, 00:20:36.816 "zcopy": true, 00:20:36.816 "get_zone_info": false, 00:20:36.816 "zone_management": false, 00:20:36.816 "zone_append": false, 00:20:36.816 "compare": false, 00:20:36.816 "compare_and_write": false, 00:20:36.816 "abort": true, 00:20:36.816 "seek_hole": false, 00:20:36.816 "seek_data": false, 00:20:36.816 "copy": true, 00:20:36.816 "nvme_iov_md": false 00:20:36.816 }, 00:20:36.816 "memory_domains": [ 00:20:36.816 { 00:20:36.816 "dma_device_id": "system", 00:20:36.816 "dma_device_type": 1 00:20:36.816 }, 00:20:36.816 { 00:20:36.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.816 "dma_device_type": 2 00:20:36.816 } 00:20:36.816 ], 00:20:36.816 "driver_specific": {} 00:20:36.816 } 00:20:36.816 ] 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:36.816 23:02:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.816 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:36.816 "name": "Existed_Raid", 00:20:36.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.816 "strip_size_kb": 64, 00:20:36.816 "state": "configuring", 00:20:36.816 "raid_level": "concat", 00:20:36.816 "superblock": false, 00:20:36.816 "num_base_bdevs": 3, 00:20:36.816 "num_base_bdevs_discovered": 1, 00:20:36.816 "num_base_bdevs_operational": 3, 00:20:36.816 "base_bdevs_list": [ 00:20:36.816 { 00:20:36.816 "name": "BaseBdev1", 00:20:36.816 "uuid": "a0656686-c3ef-4e6c-a0e4-a958000edfba", 00:20:36.816 "is_configured": true, 00:20:36.816 "data_offset": 0, 00:20:36.816 "data_size": 65536 00:20:36.816 }, 00:20:36.816 { 00:20:36.816 "name": "BaseBdev2", 00:20:36.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.817 "is_configured": false, 00:20:36.817 
"data_offset": 0, 00:20:36.817 "data_size": 0 00:20:36.817 }, 00:20:36.817 { 00:20:36.817 "name": "BaseBdev3", 00:20:36.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.817 "is_configured": false, 00:20:36.817 "data_offset": 0, 00:20:36.817 "data_size": 0 00:20:36.817 } 00:20:36.817 ] 00:20:36.817 }' 00:20:36.817 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:36.817 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.094 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:37.094 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.094 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.094 [2024-12-09 23:02:12.400185] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:37.094 [2024-12-09 23:02:12.400408] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:37.094 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.094 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:37.094 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.094 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.094 [2024-12-09 23:02:12.408249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:37.094 [2024-12-09 23:02:12.410631] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:37.094 [2024-12-09 23:02:12.410825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:20:37.094 [2024-12-09 23:02:12.410908] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:37.094 [2024-12-09 23:02:12.410938] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:37.094 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.094 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:37.094 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:37.094 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:37.094 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:37.094 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:37.094 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:37.094 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:37.094 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:37.094 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:37.094 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:37.094 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:37.094 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:37.094 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.094 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:20:37.094 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.094 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.094 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.094 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:37.094 "name": "Existed_Raid", 00:20:37.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.094 "strip_size_kb": 64, 00:20:37.094 "state": "configuring", 00:20:37.094 "raid_level": "concat", 00:20:37.094 "superblock": false, 00:20:37.094 "num_base_bdevs": 3, 00:20:37.094 "num_base_bdevs_discovered": 1, 00:20:37.094 "num_base_bdevs_operational": 3, 00:20:37.094 "base_bdevs_list": [ 00:20:37.094 { 00:20:37.094 "name": "BaseBdev1", 00:20:37.094 "uuid": "a0656686-c3ef-4e6c-a0e4-a958000edfba", 00:20:37.094 "is_configured": true, 00:20:37.094 "data_offset": 0, 00:20:37.094 "data_size": 65536 00:20:37.094 }, 00:20:37.094 { 00:20:37.094 "name": "BaseBdev2", 00:20:37.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.094 "is_configured": false, 00:20:37.094 "data_offset": 0, 00:20:37.094 "data_size": 0 00:20:37.094 }, 00:20:37.094 { 00:20:37.094 "name": "BaseBdev3", 00:20:37.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.094 "is_configured": false, 00:20:37.094 "data_offset": 0, 00:20:37.094 "data_size": 0 00:20:37.094 } 00:20:37.094 ] 00:20:37.094 }' 00:20:37.094 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:37.094 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.354 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:37.354 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:37.354 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.616 [2024-12-09 23:02:12.747933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:37.616 BaseBdev2 00:20:37.616 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.616 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:37.616 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:37.616 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:37.616 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:37.616 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:37.616 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:37.616 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:37.616 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.616 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.616 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.616 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:37.616 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.616 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.616 [ 00:20:37.616 { 00:20:37.616 "name": "BaseBdev2", 00:20:37.616 "aliases": [ 00:20:37.616 "2ac7f6e5-a361-4ddc-a8c5-c7e1170b9361" 00:20:37.616 ], 
00:20:37.616 "product_name": "Malloc disk", 00:20:37.616 "block_size": 512, 00:20:37.616 "num_blocks": 65536, 00:20:37.616 "uuid": "2ac7f6e5-a361-4ddc-a8c5-c7e1170b9361", 00:20:37.616 "assigned_rate_limits": { 00:20:37.616 "rw_ios_per_sec": 0, 00:20:37.616 "rw_mbytes_per_sec": 0, 00:20:37.616 "r_mbytes_per_sec": 0, 00:20:37.616 "w_mbytes_per_sec": 0 00:20:37.616 }, 00:20:37.616 "claimed": true, 00:20:37.616 "claim_type": "exclusive_write", 00:20:37.616 "zoned": false, 00:20:37.616 "supported_io_types": { 00:20:37.616 "read": true, 00:20:37.616 "write": true, 00:20:37.616 "unmap": true, 00:20:37.616 "flush": true, 00:20:37.616 "reset": true, 00:20:37.616 "nvme_admin": false, 00:20:37.616 "nvme_io": false, 00:20:37.616 "nvme_io_md": false, 00:20:37.616 "write_zeroes": true, 00:20:37.616 "zcopy": true, 00:20:37.616 "get_zone_info": false, 00:20:37.616 "zone_management": false, 00:20:37.616 "zone_append": false, 00:20:37.616 "compare": false, 00:20:37.616 "compare_and_write": false, 00:20:37.616 "abort": true, 00:20:37.616 "seek_hole": false, 00:20:37.616 "seek_data": false, 00:20:37.616 "copy": true, 00:20:37.616 "nvme_iov_md": false 00:20:37.616 }, 00:20:37.616 "memory_domains": [ 00:20:37.616 { 00:20:37.616 "dma_device_id": "system", 00:20:37.616 "dma_device_type": 1 00:20:37.616 }, 00:20:37.616 { 00:20:37.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:37.616 "dma_device_type": 2 00:20:37.616 } 00:20:37.616 ], 00:20:37.616 "driver_specific": {} 00:20:37.616 } 00:20:37.616 ] 00:20:37.616 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.616 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:37.616 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:37.616 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:37.616 23:02:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:37.616 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:37.616 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:37.616 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:37.616 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:37.616 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:37.616 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:37.616 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:37.616 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:37.616 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:37.616 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.616 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.616 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:37.617 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.617 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.617 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:37.617 "name": "Existed_Raid", 00:20:37.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.617 "strip_size_kb": 64, 00:20:37.617 "state": "configuring", 00:20:37.617 "raid_level": "concat", 00:20:37.617 
"superblock": false, 00:20:37.617 "num_base_bdevs": 3, 00:20:37.617 "num_base_bdevs_discovered": 2, 00:20:37.617 "num_base_bdevs_operational": 3, 00:20:37.617 "base_bdevs_list": [ 00:20:37.617 { 00:20:37.617 "name": "BaseBdev1", 00:20:37.617 "uuid": "a0656686-c3ef-4e6c-a0e4-a958000edfba", 00:20:37.617 "is_configured": true, 00:20:37.617 "data_offset": 0, 00:20:37.617 "data_size": 65536 00:20:37.617 }, 00:20:37.617 { 00:20:37.617 "name": "BaseBdev2", 00:20:37.617 "uuid": "2ac7f6e5-a361-4ddc-a8c5-c7e1170b9361", 00:20:37.617 "is_configured": true, 00:20:37.617 "data_offset": 0, 00:20:37.617 "data_size": 65536 00:20:37.617 }, 00:20:37.617 { 00:20:37.617 "name": "BaseBdev3", 00:20:37.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.617 "is_configured": false, 00:20:37.617 "data_offset": 0, 00:20:37.617 "data_size": 0 00:20:37.617 } 00:20:37.617 ] 00:20:37.617 }' 00:20:37.617 23:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:37.617 23:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.878 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:37.878 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.878 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.878 [2024-12-09 23:02:13.162405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:37.878 [2024-12-09 23:02:13.162645] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:37.878 [2024-12-09 23:02:13.162691] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:37.878 [2024-12-09 23:02:13.163222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:37.878 [2024-12-09 23:02:13.163524] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:37.878 [2024-12-09 23:02:13.163621] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:37.878 [2024-12-09 23:02:13.163971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:37.878 BaseBdev3 00:20:37.878 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.878 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:20:37.878 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:37.878 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:37.878 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:37.878 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:37.878 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:37.878 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:37.878 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.878 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.878 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.878 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:37.878 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.878 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.878 [ 00:20:37.878 { 00:20:37.878 
"name": "BaseBdev3", 00:20:37.878 "aliases": [ 00:20:37.879 "4fd64ceb-2485-404a-93dd-70cf169bc671" 00:20:37.879 ], 00:20:37.879 "product_name": "Malloc disk", 00:20:37.879 "block_size": 512, 00:20:37.879 "num_blocks": 65536, 00:20:37.879 "uuid": "4fd64ceb-2485-404a-93dd-70cf169bc671", 00:20:37.879 "assigned_rate_limits": { 00:20:37.879 "rw_ios_per_sec": 0, 00:20:37.879 "rw_mbytes_per_sec": 0, 00:20:37.879 "r_mbytes_per_sec": 0, 00:20:37.879 "w_mbytes_per_sec": 0 00:20:37.879 }, 00:20:37.879 "claimed": true, 00:20:37.879 "claim_type": "exclusive_write", 00:20:37.879 "zoned": false, 00:20:37.879 "supported_io_types": { 00:20:37.879 "read": true, 00:20:37.879 "write": true, 00:20:37.879 "unmap": true, 00:20:37.879 "flush": true, 00:20:37.879 "reset": true, 00:20:37.879 "nvme_admin": false, 00:20:37.879 "nvme_io": false, 00:20:37.879 "nvme_io_md": false, 00:20:37.879 "write_zeroes": true, 00:20:37.879 "zcopy": true, 00:20:37.879 "get_zone_info": false, 00:20:37.879 "zone_management": false, 00:20:37.879 "zone_append": false, 00:20:37.879 "compare": false, 00:20:37.879 "compare_and_write": false, 00:20:37.879 "abort": true, 00:20:37.879 "seek_hole": false, 00:20:37.879 "seek_data": false, 00:20:37.879 "copy": true, 00:20:37.879 "nvme_iov_md": false 00:20:37.879 }, 00:20:37.879 "memory_domains": [ 00:20:37.879 { 00:20:37.879 "dma_device_id": "system", 00:20:37.879 "dma_device_type": 1 00:20:37.879 }, 00:20:37.879 { 00:20:37.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:37.879 "dma_device_type": 2 00:20:37.879 } 00:20:37.879 ], 00:20:37.879 "driver_specific": {} 00:20:37.879 } 00:20:37.879 ] 00:20:37.879 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.879 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:37.879 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:37.879 23:02:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:20:37.879 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3
00:20:37.879 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:20:37.879 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:37.879 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:20:37.879 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:20:37.879 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:20:37.879 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:37.879 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:37.879 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:37.879 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:37.879 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:37.879 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:20:37.879 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:37.879 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:20:37.879 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:37.879 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:37.879 "name": "Existed_Raid",
00:20:37.879 "uuid": "435237d0-fcfb-4cbc-8b89-75fc7916fe92",
00:20:37.879 "strip_size_kb": 64,
00:20:37.879 "state": "online",
00:20:37.879 "raid_level": "concat",
00:20:37.879 "superblock": false,
00:20:37.879 "num_base_bdevs": 3,
00:20:37.879 "num_base_bdevs_discovered": 3,
00:20:37.879 "num_base_bdevs_operational": 3,
00:20:37.879 "base_bdevs_list": [
00:20:37.879 {
00:20:37.879 "name": "BaseBdev1",
00:20:37.879 "uuid": "a0656686-c3ef-4e6c-a0e4-a958000edfba",
00:20:37.879 "is_configured": true,
00:20:37.879 "data_offset": 0,
00:20:37.879 "data_size": 65536
00:20:37.879 },
00:20:37.879 {
00:20:37.879 "name": "BaseBdev2",
00:20:37.879 "uuid": "2ac7f6e5-a361-4ddc-a8c5-c7e1170b9361",
00:20:37.879 "is_configured": true,
00:20:37.879 "data_offset": 0,
00:20:37.879 "data_size": 65536
00:20:37.879 },
00:20:37.879 {
00:20:37.879 "name": "BaseBdev3",
00:20:37.879 "uuid": "4fd64ceb-2485-404a-93dd-70cf169bc671",
00:20:37.879 "is_configured": true,
00:20:37.879 "data_offset": 0,
00:20:37.879 "data_size": 65536
00:20:37.879 }
00:20:37.879 ]
00:20:37.879 }'
00:20:37.879 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:37.879 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:20:38.452 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:20:38.452 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:20:38.452 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:20:38.452 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:20:38.452 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:20:38.452 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:20:38.452 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:20:38.452 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:20:38.452 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:38.452 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:20:38.452 [2024-12-09 23:02:13.510930] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:38.452 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:38.452 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:20:38.452 "name": "Existed_Raid",
00:20:38.452 "aliases": [
00:20:38.452 "435237d0-fcfb-4cbc-8b89-75fc7916fe92"
00:20:38.452 ],
00:20:38.452 "product_name": "Raid Volume",
00:20:38.452 "block_size": 512,
00:20:38.452 "num_blocks": 196608,
00:20:38.452 "uuid": "435237d0-fcfb-4cbc-8b89-75fc7916fe92",
00:20:38.452 "assigned_rate_limits": {
00:20:38.452 "rw_ios_per_sec": 0,
00:20:38.452 "rw_mbytes_per_sec": 0,
00:20:38.452 "r_mbytes_per_sec": 0,
00:20:38.452 "w_mbytes_per_sec": 0
00:20:38.452 },
00:20:38.452 "claimed": false,
00:20:38.452 "zoned": false,
00:20:38.452 "supported_io_types": {
00:20:38.452 "read": true,
00:20:38.452 "write": true,
00:20:38.452 "unmap": true,
00:20:38.452 "flush": true,
00:20:38.452 "reset": true,
00:20:38.452 "nvme_admin": false,
00:20:38.452 "nvme_io": false,
00:20:38.452 "nvme_io_md": false,
00:20:38.452 "write_zeroes": true,
00:20:38.452 "zcopy": false,
00:20:38.452 "get_zone_info": false,
00:20:38.452 "zone_management": false,
00:20:38.452 "zone_append": false,
00:20:38.452 "compare": false,
00:20:38.452 "compare_and_write": false,
00:20:38.452 "abort": false,
00:20:38.452 "seek_hole": false,
00:20:38.452 "seek_data": false,
00:20:38.452 "copy": false,
00:20:38.452 "nvme_iov_md": false
00:20:38.452 },
00:20:38.452 "memory_domains": [
00:20:38.452 {
00:20:38.452 "dma_device_id": "system",
00:20:38.452 "dma_device_type": 1
00:20:38.452 },
00:20:38.452 {
00:20:38.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:38.452 "dma_device_type": 2
00:20:38.452 },
00:20:38.452 {
00:20:38.452 "dma_device_id": "system",
00:20:38.452 "dma_device_type": 1
00:20:38.452 },
00:20:38.452 {
00:20:38.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:38.452 "dma_device_type": 2
00:20:38.452 },
00:20:38.452 {
00:20:38.452 "dma_device_id": "system",
00:20:38.452 "dma_device_type": 1
00:20:38.452 },
00:20:38.452 {
00:20:38.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:38.452 "dma_device_type": 2
00:20:38.452 }
00:20:38.452 ],
00:20:38.452 "driver_specific": {
00:20:38.452 "raid": {
00:20:38.452 "uuid": "435237d0-fcfb-4cbc-8b89-75fc7916fe92",
00:20:38.452 "strip_size_kb": 64,
00:20:38.452 "state": "online",
00:20:38.452 "raid_level": "concat",
00:20:38.452 "superblock": false,
00:20:38.452 "num_base_bdevs": 3,
00:20:38.452 "num_base_bdevs_discovered": 3,
00:20:38.452 "num_base_bdevs_operational": 3,
00:20:38.452 "base_bdevs_list": [
00:20:38.452 {
00:20:38.452 "name": "BaseBdev1",
00:20:38.452 "uuid": "a0656686-c3ef-4e6c-a0e4-a958000edfba",
00:20:38.452 "is_configured": true,
00:20:38.452 "data_offset": 0,
00:20:38.452 "data_size": 65536
00:20:38.452 },
00:20:38.452 {
00:20:38.452 "name": "BaseBdev2",
00:20:38.452 "uuid": "2ac7f6e5-a361-4ddc-a8c5-c7e1170b9361",
00:20:38.452 "is_configured": true,
00:20:38.452 "data_offset": 0,
00:20:38.452 "data_size": 65536
00:20:38.452 },
00:20:38.452 {
00:20:38.452 "name": "BaseBdev3",
00:20:38.452 "uuid": "4fd64ceb-2485-404a-93dd-70cf169bc671",
00:20:38.452 "is_configured": true,
00:20:38.452 "data_offset": 0,
00:20:38.452 "data_size": 65536
00:20:38.452 }
00:20:38.452 ]
00:20:38.452 }
00:20:38.453 }
00:20:38.453 }'
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:20:38.453 BaseBdev2
00:20:38.453 BaseBdev3'
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:20:38.453 [2024-12-09 23:02:13.738691] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
[2024-12-09 23:02:13.738886] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-12-09 23:02:13.739027] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:38.453 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:38.714 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:38.714 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:20:38.714 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:38.714 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:20:38.714 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:38.714 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:38.714 "name": "Existed_Raid",
00:20:38.714 "uuid": "435237d0-fcfb-4cbc-8b89-75fc7916fe92",
00:20:38.714 "strip_size_kb": 64,
00:20:38.714 "state": "offline",
00:20:38.714 "raid_level": "concat",
00:20:38.714 "superblock": false,
00:20:38.714 "num_base_bdevs": 3,
00:20:38.714 "num_base_bdevs_discovered": 2,
00:20:38.714 "num_base_bdevs_operational": 2,
00:20:38.714 "base_bdevs_list": [
00:20:38.714 {
00:20:38.714 "name": null,
00:20:38.714 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:38.714 "is_configured": false,
00:20:38.714 "data_offset": 0,
00:20:38.714 "data_size": 65536
00:20:38.714 },
00:20:38.714 {
00:20:38.714 "name": "BaseBdev2",
00:20:38.714 "uuid": "2ac7f6e5-a361-4ddc-a8c5-c7e1170b9361",
00:20:38.714 "is_configured": true,
00:20:38.714 "data_offset": 0,
00:20:38.714 "data_size": 65536
00:20:38.714 },
00:20:38.714 {
00:20:38.714 "name": "BaseBdev3",
00:20:38.714 "uuid": "4fd64ceb-2485-404a-93dd-70cf169bc671",
00:20:38.714 "is_configured": true,
00:20:38.714 "data_offset": 0,
00:20:38.714 "data_size": 65536
00:20:38.714 }
00:20:38.714 ]
00:20:38.714 }'
00:20:38.714 23:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:38.714 23:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:20:38.975 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:20:38.975 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:20:38.975 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:38.975 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:20:38.975 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:38.975 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:20:38.975 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:38.975 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:20:38.975 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:20:38.975 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:20:38.975 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:38.975 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:20:38.975 [2024-12-09 23:02:14.151210] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:20:38.975 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:38.975 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:20:38.975 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:20:38.975 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:38.975 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:38.975 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:20:38.975 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:20:38.975 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:38.975 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:20:38.975 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:20:38.975 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:20:38.975 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:38.975 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:20:38.975 [2024-12-09 23:02:14.261590] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
[2024-12-09 23:02:14.261837] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:20:38.975 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:38.975 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:20:38.975 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:20:38.975 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:38.975 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:38.975 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:20:38.975 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:20:39.236 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.236 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:20:39.236 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:20:39.236 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:20:39.236 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:20:39.236 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:20:39.236 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:20:39.236 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.236 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:20:39.236 BaseBdev2
00:20:39.236 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.236 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:20:39.236 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:20:39.236 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:20:39.236 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:20:39.236 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:20:39.236 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:20:39.236 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:20:39.236 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.236 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:20:39.236 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.236 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:20:39.236 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.236 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:20:39.236 [
00:20:39.236 {
00:20:39.236 "name": "BaseBdev2",
00:20:39.236 "aliases": [
00:20:39.236 "72813997-8608-4363-8841-f3c730184ff7"
00:20:39.236 ],
00:20:39.236 "product_name": "Malloc disk",
00:20:39.236 "block_size": 512,
00:20:39.236 "num_blocks": 65536,
00:20:39.236 "uuid": "72813997-8608-4363-8841-f3c730184ff7",
00:20:39.236 "assigned_rate_limits": {
00:20:39.236 "rw_ios_per_sec": 0,
00:20:39.236 "rw_mbytes_per_sec": 0,
00:20:39.236 "r_mbytes_per_sec": 0,
00:20:39.236 "w_mbytes_per_sec": 0
00:20:39.236 },
00:20:39.236 "claimed": false,
00:20:39.236 "zoned": false,
00:20:39.236 "supported_io_types": {
00:20:39.236 "read": true,
00:20:39.236 "write": true,
00:20:39.236 "unmap": true,
00:20:39.236 "flush": true,
00:20:39.236 "reset": true,
00:20:39.236 "nvme_admin": false,
00:20:39.236 "nvme_io": false,
00:20:39.236 "nvme_io_md": false,
00:20:39.236 "write_zeroes": true,
00:20:39.236 "zcopy": true,
00:20:39.236 "get_zone_info": false,
00:20:39.236 "zone_management": false,
00:20:39.236 "zone_append": false,
00:20:39.236 "compare": false,
00:20:39.236 "compare_and_write": false,
00:20:39.236 "abort": true,
00:20:39.236 "seek_hole": false,
00:20:39.236 "seek_data": false,
00:20:39.236 "copy": true,
00:20:39.236 "nvme_iov_md": false
00:20:39.236 },
00:20:39.236 "memory_domains": [
00:20:39.236 {
00:20:39.236 "dma_device_id": "system",
00:20:39.236 "dma_device_type": 1
00:20:39.236 },
00:20:39.236 {
00:20:39.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:39.236 "dma_device_type": 2
00:20:39.236 }
00:20:39.236 ],
00:20:39.236 "driver_specific": {}
00:20:39.236 }
00:20:39.236 ]
00:20:39.236 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.236 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:20:39.236 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:20:39.236 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:20:39.236 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:20:39.237 BaseBdev3
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:20:39.237 [
00:20:39.237 {
00:20:39.237 "name": "BaseBdev3",
00:20:39.237 "aliases": [
00:20:39.237 "d3677b20-45dd-4258-ba6e-3d1552a19441"
00:20:39.237 ],
00:20:39.237 "product_name": "Malloc disk",
00:20:39.237 "block_size": 512,
00:20:39.237 "num_blocks": 65536,
00:20:39.237 "uuid": "d3677b20-45dd-4258-ba6e-3d1552a19441",
00:20:39.237 "assigned_rate_limits": {
00:20:39.237 "rw_ios_per_sec": 0,
00:20:39.237 "rw_mbytes_per_sec": 0,
00:20:39.237 "r_mbytes_per_sec": 0,
00:20:39.237 "w_mbytes_per_sec": 0
00:20:39.237 },
00:20:39.237 "claimed": false,
00:20:39.237 "zoned": false,
00:20:39.237 "supported_io_types": {
00:20:39.237 "read": true,
00:20:39.237 "write": true,
00:20:39.237 "unmap": true,
00:20:39.237 "flush": true,
00:20:39.237 "reset": true,
00:20:39.237 "nvme_admin": false,
00:20:39.237 "nvme_io": false,
00:20:39.237 "nvme_io_md": false,
00:20:39.237 "write_zeroes": true,
00:20:39.237 "zcopy": true,
00:20:39.237 "get_zone_info": false,
00:20:39.237 "zone_management": false,
00:20:39.237 "zone_append": false,
00:20:39.237 "compare": false,
00:20:39.237 "compare_and_write": false,
00:20:39.237 "abort": true,
00:20:39.237 "seek_hole": false,
00:20:39.237 "seek_data": false,
00:20:39.237 "copy": true,
00:20:39.237 "nvme_iov_md": false
00:20:39.237 },
00:20:39.237 "memory_domains": [
00:20:39.237 {
00:20:39.237 "dma_device_id": "system",
00:20:39.237 "dma_device_type": 1
00:20:39.237 },
00:20:39.237 {
00:20:39.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:39.237 "dma_device_type": 2
00:20:39.237 }
00:20:39.237 ],
00:20:39.237 "driver_specific": {}
00:20:39.237 }
00:20:39.237 ]
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:20:39.237 [2024-12-09 23:02:14.476981] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
[2024-12-09 23:02:14.477153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
[2024-12-09 23:02:14.477232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
[2024-12-09 23:02:14.479144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:39.237 "name": "Existed_Raid",
00:20:39.237 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:39.237 "strip_size_kb": 64,
00:20:39.237 "state": "configuring",
00:20:39.237 "raid_level": "concat",
00:20:39.237 "superblock": false,
00:20:39.237 "num_base_bdevs": 3,
00:20:39.237 "num_base_bdevs_discovered": 2,
00:20:39.237 "num_base_bdevs_operational": 3,
00:20:39.237 "base_bdevs_list": [
00:20:39.237 {
00:20:39.237 "name": "BaseBdev1",
00:20:39.237 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:39.237 "is_configured": false,
00:20:39.237 "data_offset": 0,
00:20:39.237 "data_size": 0
00:20:39.237 },
00:20:39.237 {
00:20:39.237 "name": "BaseBdev2",
00:20:39.237 "uuid": "72813997-8608-4363-8841-f3c730184ff7",
00:20:39.237 "is_configured": true,
00:20:39.237 "data_offset": 0,
00:20:39.237 "data_size": 65536
00:20:39.237 },
00:20:39.237 {
00:20:39.237 "name": "BaseBdev3",
00:20:39.237 "uuid": "d3677b20-45dd-4258-ba6e-3d1552a19441",
00:20:39.237 "is_configured": true,
00:20:39.237 "data_offset": 0,
00:20:39.237 "data_size": 65536
00:20:39.237 }
00:20:39.237 ]
00:20:39.237 }'
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:39.237 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:20:39.512 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:20:39.512 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.512 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:20:39.512 [2024-12-09 23:02:14.801070] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:20:39.512 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.512 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:20:39.512 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:20:39.512 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:20:39.512 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:20:39.512 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:20:39.512 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:20:39.512 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:39.512 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:39.512 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:39.512 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:39.512 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:39.512 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.512 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:20:39.512 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:20:39.512 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.512 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:39.512 "name": "Existed_Raid",
00:20:39.512 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:39.512 "strip_size_kb": 64,
00:20:39.512 "state": "configuring",
00:20:39.512 "raid_level": "concat",
00:20:39.512 "superblock": false,
00:20:39.512 "num_base_bdevs": 3,
00:20:39.512 "num_base_bdevs_discovered": 1,
00:20:39.512 "num_base_bdevs_operational": 3,
00:20:39.512 "base_bdevs_list": [
00:20:39.512 {
00:20:39.512 "name": "BaseBdev1",
00:20:39.512 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:39.512 "is_configured": false,
00:20:39.512 "data_offset": 0,
00:20:39.512 "data_size": 0
00:20:39.512 },
00:20:39.512 {
00:20:39.512 "name": null,
00:20:39.512 "uuid": "72813997-8608-4363-8841-f3c730184ff7",
00:20:39.512 "is_configured": false,
00:20:39.512 "data_offset": 0,
00:20:39.512 "data_size": 65536
00:20:39.512 },
00:20:39.512 {
00:20:39.512 "name": "BaseBdev3",
00:20:39.512 "uuid": "d3677b20-45dd-4258-ba6e-3d1552a19441",
00:20:39.512 "is_configured": true,
00:20:39.512 "data_offset": 0,
00:20:39.512 "data_size": 65536
00:20:39.512 }
00:20:39.512 ]
00:20:39.512 }'
00:20:39.512 23:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:39.512 23:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:20:39.773 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:39.773 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:20:39.773 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.773 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:20:40.037 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.037 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:20:40.037 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:20:40.037 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.037 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:20:40.037 [2024-12-09 23:02:15.180152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:20:40.037 BaseBdev1
00:20:40.037 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.037 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:20:40.037 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:20:40.037 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:20:40.037 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:20:40.037 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:20:40.037 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:20:40.037 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:20:40.037 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.037 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:20:40.037 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.037 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:20:40.037 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.037 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:20:40.037 [
00:20:40.037 {
00:20:40.037 "name": "BaseBdev1",
00:20:40.037 "aliases": [
00:20:40.037 "292b28d3-430a-4c20-b7bf-30ed639f97c1"
00:20:40.037 ],
00:20:40.038 "product_name": "Malloc disk",
00:20:40.038 "block_size": 512,
00:20:40.038 "num_blocks": 65536,
00:20:40.038 "uuid": "292b28d3-430a-4c20-b7bf-30ed639f97c1",
00:20:40.038 "assigned_rate_limits": {
00:20:40.038 "rw_ios_per_sec": 0,
00:20:40.038 "rw_mbytes_per_sec": 0,
00:20:40.038 "r_mbytes_per_sec": 0,
00:20:40.038 "w_mbytes_per_sec": 0
00:20:40.038 },
00:20:40.038 "claimed": true,
00:20:40.038 "claim_type": "exclusive_write",
00:20:40.038 "zoned": false,
00:20:40.038 "supported_io_types": {
00:20:40.038 "read": true,
00:20:40.038 "write": true,
00:20:40.038 "unmap": true,
00:20:40.038 "flush": true,
00:20:40.038 "reset": true,
00:20:40.038 "nvme_admin": false,
00:20:40.038 "nvme_io": false,
00:20:40.038 "nvme_io_md": false,
00:20:40.038 "write_zeroes": true,
00:20:40.038 "zcopy": true,
00:20:40.038 "get_zone_info": false,
00:20:40.038 "zone_management": false,
00:20:40.038 "zone_append": false,
00:20:40.038 "compare": false,
00:20:40.038 "compare_and_write": false,
00:20:40.038 "abort": true,
00:20:40.038 "seek_hole": false,
00:20:40.038 "seek_data": false,
00:20:40.038 "copy": true,
00:20:40.038 "nvme_iov_md": false
00:20:40.038 },
00:20:40.038 "memory_domains": [
00:20:40.038 {
00:20:40.038 "dma_device_id": "system",
00:20:40.038 "dma_device_type": 1
00:20:40.038 },
00:20:40.038 {
00:20:40.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:40.038 "dma_device_type": 2
00:20:40.038 }
00:20:40.038 ],
00:20:40.038 "driver_specific": {}
00:20:40.038 }
00:20:40.038 ]
00:20:40.038 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.038 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:20:40.038 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:20:40.038 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:20:40.038 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:20:40.038 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:20:40.038 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:20:40.038 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:20:40.038 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:40.038 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:40.038 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:40.038 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:40.038 23:02:15 bdev_raid.raid_state_function_test --
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.038 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:40.038 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.038 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.038 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.038 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:40.038 "name": "Existed_Raid", 00:20:40.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.038 "strip_size_kb": 64, 00:20:40.038 "state": "configuring", 00:20:40.038 "raid_level": "concat", 00:20:40.038 "superblock": false, 00:20:40.038 "num_base_bdevs": 3, 00:20:40.038 "num_base_bdevs_discovered": 2, 00:20:40.038 "num_base_bdevs_operational": 3, 00:20:40.038 "base_bdevs_list": [ 00:20:40.038 { 00:20:40.038 "name": "BaseBdev1", 00:20:40.038 "uuid": "292b28d3-430a-4c20-b7bf-30ed639f97c1", 00:20:40.038 "is_configured": true, 00:20:40.038 "data_offset": 0, 00:20:40.038 "data_size": 65536 00:20:40.038 }, 00:20:40.038 { 00:20:40.038 "name": null, 00:20:40.038 "uuid": "72813997-8608-4363-8841-f3c730184ff7", 00:20:40.038 "is_configured": false, 00:20:40.038 "data_offset": 0, 00:20:40.038 "data_size": 65536 00:20:40.038 }, 00:20:40.038 { 00:20:40.038 "name": "BaseBdev3", 00:20:40.038 "uuid": "d3677b20-45dd-4258-ba6e-3d1552a19441", 00:20:40.038 "is_configured": true, 00:20:40.038 "data_offset": 0, 00:20:40.038 "data_size": 65536 00:20:40.038 } 00:20:40.038 ] 00:20:40.038 }' 00:20:40.038 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:40.038 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.301 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 
-- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.301 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.301 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:40.301 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.301 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.301 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:20:40.301 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:20:40.301 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.301 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.301 [2024-12-09 23:02:15.560295] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:40.301 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.301 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:40.301 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:40.301 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:40.301 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:40.301 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:40.301 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:40.301 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:20:40.301 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:40.301 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:40.301 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:40.301 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.301 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:40.301 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.301 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.301 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.301 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:40.301 "name": "Existed_Raid", 00:20:40.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.301 "strip_size_kb": 64, 00:20:40.301 "state": "configuring", 00:20:40.301 "raid_level": "concat", 00:20:40.301 "superblock": false, 00:20:40.301 "num_base_bdevs": 3, 00:20:40.301 "num_base_bdevs_discovered": 1, 00:20:40.301 "num_base_bdevs_operational": 3, 00:20:40.301 "base_bdevs_list": [ 00:20:40.301 { 00:20:40.301 "name": "BaseBdev1", 00:20:40.301 "uuid": "292b28d3-430a-4c20-b7bf-30ed639f97c1", 00:20:40.301 "is_configured": true, 00:20:40.301 "data_offset": 0, 00:20:40.301 "data_size": 65536 00:20:40.301 }, 00:20:40.301 { 00:20:40.301 "name": null, 00:20:40.301 "uuid": "72813997-8608-4363-8841-f3c730184ff7", 00:20:40.301 "is_configured": false, 00:20:40.301 "data_offset": 0, 00:20:40.301 "data_size": 65536 00:20:40.301 }, 00:20:40.301 { 00:20:40.301 "name": null, 00:20:40.301 "uuid": "d3677b20-45dd-4258-ba6e-3d1552a19441", 00:20:40.301 "is_configured": false, 00:20:40.301 
"data_offset": 0, 00:20:40.301 "data_size": 65536 00:20:40.301 } 00:20:40.301 ] 00:20:40.301 }' 00:20:40.301 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:40.301 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.562 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:40.562 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.562 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.562 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.562 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.562 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:20:40.562 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:40.563 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.563 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.563 [2024-12-09 23:02:15.912463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:40.563 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.563 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:40.563 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:40.563 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:40.563 23:02:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:40.563 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:40.563 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:40.563 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:40.563 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:40.563 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:40.563 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:40.563 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.563 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.563 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.563 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:40.824 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.824 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:40.824 "name": "Existed_Raid", 00:20:40.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.824 "strip_size_kb": 64, 00:20:40.824 "state": "configuring", 00:20:40.824 "raid_level": "concat", 00:20:40.824 "superblock": false, 00:20:40.824 "num_base_bdevs": 3, 00:20:40.824 "num_base_bdevs_discovered": 2, 00:20:40.824 "num_base_bdevs_operational": 3, 00:20:40.824 "base_bdevs_list": [ 00:20:40.824 { 00:20:40.824 "name": "BaseBdev1", 00:20:40.824 "uuid": "292b28d3-430a-4c20-b7bf-30ed639f97c1", 00:20:40.824 "is_configured": true, 00:20:40.824 "data_offset": 
0, 00:20:40.824 "data_size": 65536 00:20:40.824 }, 00:20:40.824 { 00:20:40.824 "name": null, 00:20:40.824 "uuid": "72813997-8608-4363-8841-f3c730184ff7", 00:20:40.824 "is_configured": false, 00:20:40.824 "data_offset": 0, 00:20:40.824 "data_size": 65536 00:20:40.824 }, 00:20:40.824 { 00:20:40.824 "name": "BaseBdev3", 00:20:40.824 "uuid": "d3677b20-45dd-4258-ba6e-3d1552a19441", 00:20:40.824 "is_configured": true, 00:20:40.824 "data_offset": 0, 00:20:40.824 "data_size": 65536 00:20:40.824 } 00:20:40.824 ] 00:20:40.824 }' 00:20:40.824 23:02:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:40.824 23:02:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.085 23:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.085 23:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:41.085 23:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.085 23:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.085 23:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.085 23:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:20:41.085 23:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:41.085 23:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.085 23:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.085 [2024-12-09 23:02:16.280571] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:41.085 23:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.085 23:02:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:41.085 23:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:41.085 23:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:41.085 23:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:41.085 23:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:41.085 23:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:41.085 23:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:41.085 23:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:41.085 23:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:41.085 23:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:41.085 23:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.085 23:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:41.085 23:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.085 23:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.085 23:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.085 23:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:41.085 "name": "Existed_Raid", 00:20:41.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.085 "strip_size_kb": 64, 00:20:41.085 "state": "configuring", 00:20:41.085 
"raid_level": "concat", 00:20:41.085 "superblock": false, 00:20:41.085 "num_base_bdevs": 3, 00:20:41.085 "num_base_bdevs_discovered": 1, 00:20:41.085 "num_base_bdevs_operational": 3, 00:20:41.085 "base_bdevs_list": [ 00:20:41.085 { 00:20:41.085 "name": null, 00:20:41.085 "uuid": "292b28d3-430a-4c20-b7bf-30ed639f97c1", 00:20:41.085 "is_configured": false, 00:20:41.085 "data_offset": 0, 00:20:41.085 "data_size": 65536 00:20:41.085 }, 00:20:41.085 { 00:20:41.085 "name": null, 00:20:41.085 "uuid": "72813997-8608-4363-8841-f3c730184ff7", 00:20:41.085 "is_configured": false, 00:20:41.085 "data_offset": 0, 00:20:41.085 "data_size": 65536 00:20:41.085 }, 00:20:41.085 { 00:20:41.085 "name": "BaseBdev3", 00:20:41.085 "uuid": "d3677b20-45dd-4258-ba6e-3d1552a19441", 00:20:41.085 "is_configured": true, 00:20:41.085 "data_offset": 0, 00:20:41.085 "data_size": 65536 00:20:41.085 } 00:20:41.085 ] 00:20:41.085 }' 00:20:41.085 23:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:41.085 23:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.346 23:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:41.346 23:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.346 23:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.346 23:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.346 23:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.346 23:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:20:41.346 23:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:41.346 23:02:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.346 23:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.346 [2024-12-09 23:02:16.695082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:41.346 23:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.346 23:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:41.346 23:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:41.346 23:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:41.346 23:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:41.346 23:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:41.346 23:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:41.346 23:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:41.346 23:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:41.346 23:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:41.346 23:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:41.346 23:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.346 23:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.346 23:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.346 23:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:20:41.607 23:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.607 23:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:41.607 "name": "Existed_Raid", 00:20:41.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.607 "strip_size_kb": 64, 00:20:41.607 "state": "configuring", 00:20:41.607 "raid_level": "concat", 00:20:41.607 "superblock": false, 00:20:41.607 "num_base_bdevs": 3, 00:20:41.607 "num_base_bdevs_discovered": 2, 00:20:41.607 "num_base_bdevs_operational": 3, 00:20:41.607 "base_bdevs_list": [ 00:20:41.607 { 00:20:41.607 "name": null, 00:20:41.607 "uuid": "292b28d3-430a-4c20-b7bf-30ed639f97c1", 00:20:41.607 "is_configured": false, 00:20:41.607 "data_offset": 0, 00:20:41.607 "data_size": 65536 00:20:41.607 }, 00:20:41.607 { 00:20:41.607 "name": "BaseBdev2", 00:20:41.607 "uuid": "72813997-8608-4363-8841-f3c730184ff7", 00:20:41.607 "is_configured": true, 00:20:41.607 "data_offset": 0, 00:20:41.607 "data_size": 65536 00:20:41.607 }, 00:20:41.607 { 00:20:41.607 "name": "BaseBdev3", 00:20:41.607 "uuid": "d3677b20-45dd-4258-ba6e-3d1552a19441", 00:20:41.607 "is_configured": true, 00:20:41.607 "data_offset": 0, 00:20:41.607 "data_size": 65536 00:20:41.607 } 00:20:41.607 ] 00:20:41.607 }' 00:20:41.607 23:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:41.607 23:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.866 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.866 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.866 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.866 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:20:41.866 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.866 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:41.866 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:41.866 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.866 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 292b28d3-430a-4c20-b7bf-30ed639f97c1 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.867 NewBaseBdev 00:20:41.867 [2024-12-09 23:02:17.127318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:41.867 [2024-12-09 23:02:17.127367] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:41.867 [2024-12-09 23:02:17.127377] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:41.867 [2024-12-09 23:02:17.127666] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:41.867 [2024-12-09 23:02:17.127820] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:41.867 [2024-12-09 23:02:17.127828] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:41.867 
[2024-12-09 23:02:17.128131] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.867 [ 00:20:41.867 { 00:20:41.867 "name": "NewBaseBdev", 00:20:41.867 "aliases": [ 00:20:41.867 "292b28d3-430a-4c20-b7bf-30ed639f97c1" 00:20:41.867 ], 00:20:41.867 "product_name": "Malloc disk", 00:20:41.867 "block_size": 512, 00:20:41.867 "num_blocks": 65536, 00:20:41.867 "uuid": 
"292b28d3-430a-4c20-b7bf-30ed639f97c1", 00:20:41.867 "assigned_rate_limits": { 00:20:41.867 "rw_ios_per_sec": 0, 00:20:41.867 "rw_mbytes_per_sec": 0, 00:20:41.867 "r_mbytes_per_sec": 0, 00:20:41.867 "w_mbytes_per_sec": 0 00:20:41.867 }, 00:20:41.867 "claimed": true, 00:20:41.867 "claim_type": "exclusive_write", 00:20:41.867 "zoned": false, 00:20:41.867 "supported_io_types": { 00:20:41.867 "read": true, 00:20:41.867 "write": true, 00:20:41.867 "unmap": true, 00:20:41.867 "flush": true, 00:20:41.867 "reset": true, 00:20:41.867 "nvme_admin": false, 00:20:41.867 "nvme_io": false, 00:20:41.867 "nvme_io_md": false, 00:20:41.867 "write_zeroes": true, 00:20:41.867 "zcopy": true, 00:20:41.867 "get_zone_info": false, 00:20:41.867 "zone_management": false, 00:20:41.867 "zone_append": false, 00:20:41.867 "compare": false, 00:20:41.867 "compare_and_write": false, 00:20:41.867 "abort": true, 00:20:41.867 "seek_hole": false, 00:20:41.867 "seek_data": false, 00:20:41.867 "copy": true, 00:20:41.867 "nvme_iov_md": false 00:20:41.867 }, 00:20:41.867 "memory_domains": [ 00:20:41.867 { 00:20:41.867 "dma_device_id": "system", 00:20:41.867 "dma_device_type": 1 00:20:41.867 }, 00:20:41.867 { 00:20:41.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:41.867 "dma_device_type": 2 00:20:41.867 } 00:20:41.867 ], 00:20:41.867 "driver_specific": {} 00:20:41.867 } 00:20:41.867 ] 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:41.867 23:02:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:41.867 "name": "Existed_Raid", 00:20:41.867 "uuid": "4d9da36b-c90e-42c1-b7ce-dec0130b5fcd", 00:20:41.867 "strip_size_kb": 64, 00:20:41.867 "state": "online", 00:20:41.867 "raid_level": "concat", 00:20:41.867 "superblock": false, 00:20:41.867 "num_base_bdevs": 3, 00:20:41.867 "num_base_bdevs_discovered": 3, 00:20:41.867 "num_base_bdevs_operational": 3, 00:20:41.867 "base_bdevs_list": [ 00:20:41.867 { 00:20:41.867 "name": "NewBaseBdev", 00:20:41.867 "uuid": "292b28d3-430a-4c20-b7bf-30ed639f97c1", 00:20:41.867 "is_configured": true, 00:20:41.867 "data_offset": 0, 
00:20:41.867 "data_size": 65536 00:20:41.867 }, 00:20:41.867 { 00:20:41.867 "name": "BaseBdev2", 00:20:41.867 "uuid": "72813997-8608-4363-8841-f3c730184ff7", 00:20:41.867 "is_configured": true, 00:20:41.867 "data_offset": 0, 00:20:41.867 "data_size": 65536 00:20:41.867 }, 00:20:41.867 { 00:20:41.867 "name": "BaseBdev3", 00:20:41.867 "uuid": "d3677b20-45dd-4258-ba6e-3d1552a19441", 00:20:41.867 "is_configured": true, 00:20:41.867 "data_offset": 0, 00:20:41.867 "data_size": 65536 00:20:41.867 } 00:20:41.867 ] 00:20:41.867 }' 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:41.867 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.127 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:20:42.127 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:42.127 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:42.127 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:42.127 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:42.127 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:42.127 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:42.127 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.127 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:42.127 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.388 [2024-12-09 23:02:17.487832] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:42.388 23:02:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:42.388 "name": "Existed_Raid", 00:20:42.388 "aliases": [ 00:20:42.388 "4d9da36b-c90e-42c1-b7ce-dec0130b5fcd" 00:20:42.388 ], 00:20:42.388 "product_name": "Raid Volume", 00:20:42.388 "block_size": 512, 00:20:42.388 "num_blocks": 196608, 00:20:42.388 "uuid": "4d9da36b-c90e-42c1-b7ce-dec0130b5fcd", 00:20:42.388 "assigned_rate_limits": { 00:20:42.388 "rw_ios_per_sec": 0, 00:20:42.388 "rw_mbytes_per_sec": 0, 00:20:42.388 "r_mbytes_per_sec": 0, 00:20:42.388 "w_mbytes_per_sec": 0 00:20:42.388 }, 00:20:42.388 "claimed": false, 00:20:42.388 "zoned": false, 00:20:42.388 "supported_io_types": { 00:20:42.388 "read": true, 00:20:42.388 "write": true, 00:20:42.388 "unmap": true, 00:20:42.388 "flush": true, 00:20:42.388 "reset": true, 00:20:42.388 "nvme_admin": false, 00:20:42.388 "nvme_io": false, 00:20:42.388 "nvme_io_md": false, 00:20:42.388 "write_zeroes": true, 00:20:42.388 "zcopy": false, 00:20:42.388 "get_zone_info": false, 00:20:42.388 "zone_management": false, 00:20:42.388 "zone_append": false, 00:20:42.388 "compare": false, 00:20:42.388 "compare_and_write": false, 00:20:42.388 "abort": false, 00:20:42.388 "seek_hole": false, 00:20:42.388 "seek_data": false, 00:20:42.388 "copy": false, 00:20:42.388 "nvme_iov_md": false 00:20:42.388 }, 00:20:42.388 "memory_domains": [ 00:20:42.388 { 00:20:42.388 "dma_device_id": "system", 00:20:42.388 "dma_device_type": 1 00:20:42.388 }, 00:20:42.388 { 00:20:42.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.388 "dma_device_type": 2 00:20:42.388 }, 00:20:42.388 { 00:20:42.388 "dma_device_id": "system", 00:20:42.388 "dma_device_type": 1 00:20:42.388 }, 00:20:42.388 { 00:20:42.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.388 "dma_device_type": 2 00:20:42.388 }, 00:20:42.388 { 00:20:42.388 "dma_device_id": "system", 00:20:42.388 
"dma_device_type": 1 00:20:42.388 }, 00:20:42.388 { 00:20:42.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.388 "dma_device_type": 2 00:20:42.388 } 00:20:42.388 ], 00:20:42.388 "driver_specific": { 00:20:42.388 "raid": { 00:20:42.388 "uuid": "4d9da36b-c90e-42c1-b7ce-dec0130b5fcd", 00:20:42.388 "strip_size_kb": 64, 00:20:42.388 "state": "online", 00:20:42.388 "raid_level": "concat", 00:20:42.388 "superblock": false, 00:20:42.388 "num_base_bdevs": 3, 00:20:42.388 "num_base_bdevs_discovered": 3, 00:20:42.388 "num_base_bdevs_operational": 3, 00:20:42.388 "base_bdevs_list": [ 00:20:42.388 { 00:20:42.388 "name": "NewBaseBdev", 00:20:42.388 "uuid": "292b28d3-430a-4c20-b7bf-30ed639f97c1", 00:20:42.388 "is_configured": true, 00:20:42.388 "data_offset": 0, 00:20:42.388 "data_size": 65536 00:20:42.388 }, 00:20:42.388 { 00:20:42.388 "name": "BaseBdev2", 00:20:42.388 "uuid": "72813997-8608-4363-8841-f3c730184ff7", 00:20:42.388 "is_configured": true, 00:20:42.388 "data_offset": 0, 00:20:42.388 "data_size": 65536 00:20:42.388 }, 00:20:42.388 { 00:20:42.388 "name": "BaseBdev3", 00:20:42.388 "uuid": "d3677b20-45dd-4258-ba6e-3d1552a19441", 00:20:42.388 "is_configured": true, 00:20:42.388 "data_offset": 0, 00:20:42.388 "data_size": 65536 00:20:42.388 } 00:20:42.388 ] 00:20:42.388 } 00:20:42.388 } 00:20:42.388 }' 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:20:42.388 BaseBdev2 00:20:42.388 BaseBdev3' 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.388 [2024-12-09 23:02:17.691548] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:42.388 [2024-12-09 23:02:17.691733] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:42.388 [2024-12-09 23:02:17.692646] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:42.388 [2024-12-09 23:02:17.692853] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:42.388 [2024-12-09 23:02:17.692898] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64043 
00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 64043 ']' 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 64043 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64043 00:20:42.388 killing process with pid 64043 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64043' 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 64043 00:20:42.388 23:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 64043 00:20:42.388 [2024-12-09 23:02:17.728322] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:42.649 [2024-12-09 23:02:17.948248] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:43.591 ************************************ 00:20:43.591 END TEST raid_state_function_test 00:20:43.591 ************************************ 00:20:43.591 23:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:20:43.591 00:20:43.591 real 0m8.105s 00:20:43.591 user 0m12.529s 00:20:43.591 sys 0m1.484s 00:20:43.591 23:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:43.591 23:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.591 23:02:18 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:20:43.591 23:02:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:43.591 23:02:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:43.591 23:02:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:43.591 ************************************ 00:20:43.591 START TEST raid_state_function_test_sb 00:20:43.591 ************************************ 00:20:43.591 23:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:20:43.591 23:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:20:43.591 23:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:20:43.591 23:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:43.591 23:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:43.591 23:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:43.591 23:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:43.591 23:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:43.591 23:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:43.591 23:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:43.591 23:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:43.591 23:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:43.591 23:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:43.591 23:02:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:20:43.591 23:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:43.591 23:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:43.591 23:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:43.591 23:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:43.591 23:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:43.591 23:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:43.591 23:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:43.591 23:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:43.591 23:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:20:43.591 23:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:20:43.591 23:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:43.591 23:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:43.591 23:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:43.591 Process raid pid: 64637 00:20:43.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:43.592 23:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64637 00:20:43.592 23:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64637' 00:20:43.592 23:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64637 00:20:43.592 23:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64637 ']' 00:20:43.592 23:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.592 23:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:43.592 23:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:43.592 23:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.592 23:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:43.592 23:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.592 [2024-12-09 23:02:18.945361] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:20:43.592 [2024-12-09 23:02:18.945508] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.852 [2024-12-09 23:02:19.114872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.112 [2024-12-09 23:02:19.275435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.112 [2024-12-09 23:02:19.458190] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:44.112 [2024-12-09 23:02:19.458250] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:44.726 23:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:44.726 23:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:20:44.726 23:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:44.726 23:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.726 23:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.726 [2024-12-09 23:02:19.855317] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:44.726 [2024-12-09 23:02:19.855591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:44.726 [2024-12-09 23:02:19.855675] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:44.726 [2024-12-09 23:02:19.855706] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:44.726 [2024-12-09 23:02:19.855715] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:20:44.727 [2024-12-09 23:02:19.855725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:44.727 23:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.727 23:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:44.727 23:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:44.727 23:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:44.727 23:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:44.727 23:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:44.727 23:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:44.727 23:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:44.727 23:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:44.727 23:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:44.727 23:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:44.727 23:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:44.727 23:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:44.727 23:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.727 23:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.727 23:02:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.727 23:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:44.727 "name": "Existed_Raid", 00:20:44.727 "uuid": "1801fe11-2168-4818-9464-77dbf2633b02", 00:20:44.727 "strip_size_kb": 64, 00:20:44.727 "state": "configuring", 00:20:44.727 "raid_level": "concat", 00:20:44.727 "superblock": true, 00:20:44.727 "num_base_bdevs": 3, 00:20:44.727 "num_base_bdevs_discovered": 0, 00:20:44.727 "num_base_bdevs_operational": 3, 00:20:44.727 "base_bdevs_list": [ 00:20:44.727 { 00:20:44.727 "name": "BaseBdev1", 00:20:44.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:44.727 "is_configured": false, 00:20:44.727 "data_offset": 0, 00:20:44.727 "data_size": 0 00:20:44.727 }, 00:20:44.727 { 00:20:44.727 "name": "BaseBdev2", 00:20:44.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:44.727 "is_configured": false, 00:20:44.727 "data_offset": 0, 00:20:44.727 "data_size": 0 00:20:44.727 }, 00:20:44.727 { 00:20:44.727 "name": "BaseBdev3", 00:20:44.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:44.727 "is_configured": false, 00:20:44.727 "data_offset": 0, 00:20:44.727 "data_size": 0 00:20:44.727 } 00:20:44.727 ] 00:20:44.727 }' 00:20:44.727 23:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:44.727 23:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.988 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:44.988 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.988 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.988 [2024-12-09 23:02:20.171295] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:44.988 [2024-12-09 23:02:20.171338] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:44.988 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.988 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:44.988 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.988 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.988 [2024-12-09 23:02:20.179316] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:44.988 [2024-12-09 23:02:20.179495] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:44.988 [2024-12-09 23:02:20.179562] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:44.988 [2024-12-09 23:02:20.179590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:44.988 [2024-12-09 23:02:20.179608] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:44.988 [2024-12-09 23:02:20.179629] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:44.988 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.988 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:44.988 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.988 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.988 [2024-12-09 23:02:20.214896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:44.988 BaseBdev1 
00:20:44.988 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.988 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:44.988 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:44.988 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:44.988 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:44.988 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:44.988 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:44.988 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:44.988 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.988 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.988 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.988 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:44.988 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.988 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.988 [ 00:20:44.988 { 00:20:44.988 "name": "BaseBdev1", 00:20:44.988 "aliases": [ 00:20:44.988 "9ad94b6d-35a9-45a6-be42-e1caa1f2dd87" 00:20:44.988 ], 00:20:44.988 "product_name": "Malloc disk", 00:20:44.988 "block_size": 512, 00:20:44.988 "num_blocks": 65536, 00:20:44.988 "uuid": "9ad94b6d-35a9-45a6-be42-e1caa1f2dd87", 00:20:44.988 "assigned_rate_limits": { 00:20:44.988 
"rw_ios_per_sec": 0, 00:20:44.988 "rw_mbytes_per_sec": 0, 00:20:44.989 "r_mbytes_per_sec": 0, 00:20:44.989 "w_mbytes_per_sec": 0 00:20:44.989 }, 00:20:44.989 "claimed": true, 00:20:44.989 "claim_type": "exclusive_write", 00:20:44.989 "zoned": false, 00:20:44.989 "supported_io_types": { 00:20:44.989 "read": true, 00:20:44.989 "write": true, 00:20:44.989 "unmap": true, 00:20:44.989 "flush": true, 00:20:44.989 "reset": true, 00:20:44.989 "nvme_admin": false, 00:20:44.989 "nvme_io": false, 00:20:44.989 "nvme_io_md": false, 00:20:44.989 "write_zeroes": true, 00:20:44.989 "zcopy": true, 00:20:44.989 "get_zone_info": false, 00:20:44.989 "zone_management": false, 00:20:44.989 "zone_append": false, 00:20:44.989 "compare": false, 00:20:44.989 "compare_and_write": false, 00:20:44.989 "abort": true, 00:20:44.989 "seek_hole": false, 00:20:44.989 "seek_data": false, 00:20:44.989 "copy": true, 00:20:44.989 "nvme_iov_md": false 00:20:44.989 }, 00:20:44.989 "memory_domains": [ 00:20:44.989 { 00:20:44.989 "dma_device_id": "system", 00:20:44.989 "dma_device_type": 1 00:20:44.989 }, 00:20:44.989 { 00:20:44.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:44.989 "dma_device_type": 2 00:20:44.989 } 00:20:44.989 ], 00:20:44.989 "driver_specific": {} 00:20:44.989 } 00:20:44.989 ] 00:20:44.989 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.989 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:44.989 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:44.989 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:44.989 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:44.989 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:20:44.989 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:44.989 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:44.989 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:44.989 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:44.989 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:44.989 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:44.989 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:44.989 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:44.989 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.989 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.989 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.989 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:44.989 "name": "Existed_Raid", 00:20:44.989 "uuid": "2050b220-333b-43de-adf2-1d7f0ddf8d03", 00:20:44.989 "strip_size_kb": 64, 00:20:44.989 "state": "configuring", 00:20:44.989 "raid_level": "concat", 00:20:44.989 "superblock": true, 00:20:44.989 "num_base_bdevs": 3, 00:20:44.989 "num_base_bdevs_discovered": 1, 00:20:44.989 "num_base_bdevs_operational": 3, 00:20:44.989 "base_bdevs_list": [ 00:20:44.989 { 00:20:44.989 "name": "BaseBdev1", 00:20:44.989 "uuid": "9ad94b6d-35a9-45a6-be42-e1caa1f2dd87", 00:20:44.989 "is_configured": true, 00:20:44.989 "data_offset": 2048, 00:20:44.989 "data_size": 
63488 00:20:44.989 }, 00:20:44.989 { 00:20:44.989 "name": "BaseBdev2", 00:20:44.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:44.989 "is_configured": false, 00:20:44.989 "data_offset": 0, 00:20:44.989 "data_size": 0 00:20:44.989 }, 00:20:44.989 { 00:20:44.989 "name": "BaseBdev3", 00:20:44.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:44.989 "is_configured": false, 00:20:44.989 "data_offset": 0, 00:20:44.989 "data_size": 0 00:20:44.989 } 00:20:44.989 ] 00:20:44.989 }' 00:20:44.989 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:44.989 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.249 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:45.249 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.249 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.249 [2024-12-09 23:02:20.579031] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:45.249 [2024-12-09 23:02:20.579279] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:45.249 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.249 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:45.249 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.249 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.249 [2024-12-09 23:02:20.587124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:45.249 [2024-12-09 
23:02:20.589300] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:45.249 [2024-12-09 23:02:20.589502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:45.249 [2024-12-09 23:02:20.589523] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:45.249 [2024-12-09 23:02:20.589534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:45.249 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.249 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:45.249 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:45.249 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:45.249 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:45.249 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:45.249 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:45.249 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:45.249 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:45.249 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:45.249 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:45.249 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:45.249 23:02:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:20:45.249 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.249 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.249 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:45.249 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.249 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.507 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:45.507 "name": "Existed_Raid", 00:20:45.507 "uuid": "06b8a652-dd89-4177-bb95-847ad6d2e9c2", 00:20:45.507 "strip_size_kb": 64, 00:20:45.507 "state": "configuring", 00:20:45.507 "raid_level": "concat", 00:20:45.507 "superblock": true, 00:20:45.507 "num_base_bdevs": 3, 00:20:45.507 "num_base_bdevs_discovered": 1, 00:20:45.507 "num_base_bdevs_operational": 3, 00:20:45.507 "base_bdevs_list": [ 00:20:45.507 { 00:20:45.507 "name": "BaseBdev1", 00:20:45.507 "uuid": "9ad94b6d-35a9-45a6-be42-e1caa1f2dd87", 00:20:45.507 "is_configured": true, 00:20:45.507 "data_offset": 2048, 00:20:45.507 "data_size": 63488 00:20:45.507 }, 00:20:45.507 { 00:20:45.507 "name": "BaseBdev2", 00:20:45.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.507 "is_configured": false, 00:20:45.507 "data_offset": 0, 00:20:45.507 "data_size": 0 00:20:45.507 }, 00:20:45.507 { 00:20:45.507 "name": "BaseBdev3", 00:20:45.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.507 "is_configured": false, 00:20:45.507 "data_offset": 0, 00:20:45.507 "data_size": 0 00:20:45.507 } 00:20:45.507 ] 00:20:45.507 }' 00:20:45.507 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:45.507 23:02:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.767 BaseBdev2 00:20:45.767 [2024-12-09 23:02:20.947692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.767 [ 00:20:45.767 { 00:20:45.767 "name": "BaseBdev2", 00:20:45.767 "aliases": [ 00:20:45.767 "9dc66ea3-8d7f-4807-b84b-e0496e8a3848" 00:20:45.767 ], 00:20:45.767 "product_name": "Malloc disk", 00:20:45.767 "block_size": 512, 00:20:45.767 "num_blocks": 65536, 00:20:45.767 "uuid": "9dc66ea3-8d7f-4807-b84b-e0496e8a3848", 00:20:45.767 "assigned_rate_limits": { 00:20:45.767 "rw_ios_per_sec": 0, 00:20:45.767 "rw_mbytes_per_sec": 0, 00:20:45.767 "r_mbytes_per_sec": 0, 00:20:45.767 "w_mbytes_per_sec": 0 00:20:45.767 }, 00:20:45.767 "claimed": true, 00:20:45.767 "claim_type": "exclusive_write", 00:20:45.767 "zoned": false, 00:20:45.767 "supported_io_types": { 00:20:45.767 "read": true, 00:20:45.767 "write": true, 00:20:45.767 "unmap": true, 00:20:45.767 "flush": true, 00:20:45.767 "reset": true, 00:20:45.767 "nvme_admin": false, 00:20:45.767 "nvme_io": false, 00:20:45.767 "nvme_io_md": false, 00:20:45.767 "write_zeroes": true, 00:20:45.767 "zcopy": true, 00:20:45.767 "get_zone_info": false, 00:20:45.767 "zone_management": false, 00:20:45.767 "zone_append": false, 00:20:45.767 "compare": false, 00:20:45.767 "compare_and_write": false, 00:20:45.767 "abort": true, 00:20:45.767 "seek_hole": false, 00:20:45.767 "seek_data": false, 00:20:45.767 "copy": true, 00:20:45.767 "nvme_iov_md": false 00:20:45.767 }, 00:20:45.767 "memory_domains": [ 00:20:45.767 { 00:20:45.767 "dma_device_id": "system", 00:20:45.767 "dma_device_type": 1 00:20:45.767 }, 00:20:45.767 { 00:20:45.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:45.767 "dma_device_type": 2 00:20:45.767 } 00:20:45.767 ], 00:20:45.767 "driver_specific": {} 00:20:45.767 } 00:20:45.767 ] 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.767 23:02:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.767 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:45.767 "name": "Existed_Raid", 00:20:45.767 "uuid": "06b8a652-dd89-4177-bb95-847ad6d2e9c2", 00:20:45.767 "strip_size_kb": 64, 00:20:45.767 "state": "configuring", 00:20:45.767 "raid_level": "concat", 00:20:45.767 "superblock": true, 00:20:45.767 "num_base_bdevs": 3, 00:20:45.767 "num_base_bdevs_discovered": 2, 00:20:45.767 "num_base_bdevs_operational": 3, 00:20:45.767 "base_bdevs_list": [ 00:20:45.767 { 00:20:45.767 "name": "BaseBdev1", 00:20:45.767 "uuid": "9ad94b6d-35a9-45a6-be42-e1caa1f2dd87", 00:20:45.767 "is_configured": true, 00:20:45.767 "data_offset": 2048, 00:20:45.767 "data_size": 63488 00:20:45.767 }, 00:20:45.767 { 00:20:45.767 "name": "BaseBdev2", 00:20:45.767 "uuid": "9dc66ea3-8d7f-4807-b84b-e0496e8a3848", 00:20:45.767 "is_configured": true, 00:20:45.767 "data_offset": 2048, 00:20:45.767 "data_size": 63488 00:20:45.767 }, 00:20:45.767 { 00:20:45.767 "name": "BaseBdev3", 00:20:45.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.767 "is_configured": false, 00:20:45.767 "data_offset": 0, 00:20:45.767 "data_size": 0 00:20:45.767 } 00:20:45.767 ] 00:20:45.767 }' 00:20:45.767 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:45.767 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.029 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:46.029 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.029 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.029 BaseBdev3 00:20:46.029 [2024-12-09 23:02:21.349386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:46.029 [2024-12-09 
23:02:21.349678] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:46.029 [2024-12-09 23:02:21.349703] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:46.029 [2024-12-09 23:02:21.350193] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:46.029 [2024-12-09 23:02:21.350374] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:46.029 [2024-12-09 23:02:21.350385] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:46.029 [2024-12-09 23:02:21.350544] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:46.029 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.029 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:20:46.029 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:46.030 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:46.030 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:46.030 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:46.030 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:46.030 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:46.030 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.030 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.030 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:20:46.030 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:46.030 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.030 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.030 [ 00:20:46.030 { 00:20:46.030 "name": "BaseBdev3", 00:20:46.030 "aliases": [ 00:20:46.030 "b1e663f3-54b2-4dc3-bba1-0956747da133" 00:20:46.030 ], 00:20:46.030 "product_name": "Malloc disk", 00:20:46.030 "block_size": 512, 00:20:46.030 "num_blocks": 65536, 00:20:46.030 "uuid": "b1e663f3-54b2-4dc3-bba1-0956747da133", 00:20:46.030 "assigned_rate_limits": { 00:20:46.030 "rw_ios_per_sec": 0, 00:20:46.030 "rw_mbytes_per_sec": 0, 00:20:46.030 "r_mbytes_per_sec": 0, 00:20:46.030 "w_mbytes_per_sec": 0 00:20:46.030 }, 00:20:46.030 "claimed": true, 00:20:46.030 "claim_type": "exclusive_write", 00:20:46.030 "zoned": false, 00:20:46.030 "supported_io_types": { 00:20:46.030 "read": true, 00:20:46.030 "write": true, 00:20:46.030 "unmap": true, 00:20:46.030 "flush": true, 00:20:46.030 "reset": true, 00:20:46.030 "nvme_admin": false, 00:20:46.030 "nvme_io": false, 00:20:46.030 "nvme_io_md": false, 00:20:46.030 "write_zeroes": true, 00:20:46.030 "zcopy": true, 00:20:46.030 "get_zone_info": false, 00:20:46.030 "zone_management": false, 00:20:46.030 "zone_append": false, 00:20:46.030 "compare": false, 00:20:46.030 "compare_and_write": false, 00:20:46.030 "abort": true, 00:20:46.030 "seek_hole": false, 00:20:46.030 "seek_data": false, 00:20:46.030 "copy": true, 00:20:46.030 "nvme_iov_md": false 00:20:46.030 }, 00:20:46.030 "memory_domains": [ 00:20:46.030 { 00:20:46.030 "dma_device_id": "system", 00:20:46.030 "dma_device_type": 1 00:20:46.030 }, 00:20:46.030 { 00:20:46.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:46.030 "dma_device_type": 2 00:20:46.030 } 00:20:46.030 ], 00:20:46.030 "driver_specific": {} 
00:20:46.030 } 00:20:46.030 ] 00:20:46.030 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.030 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:46.030 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:46.030 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:46.030 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:20:46.030 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:46.030 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:46.030 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:46.030 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:46.030 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:46.030 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:46.030 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:46.030 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:46.030 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:46.030 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.030 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:46.030 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:46.030 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.292 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.292 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:46.292 "name": "Existed_Raid", 00:20:46.292 "uuid": "06b8a652-dd89-4177-bb95-847ad6d2e9c2", 00:20:46.292 "strip_size_kb": 64, 00:20:46.292 "state": "online", 00:20:46.292 "raid_level": "concat", 00:20:46.292 "superblock": true, 00:20:46.292 "num_base_bdevs": 3, 00:20:46.292 "num_base_bdevs_discovered": 3, 00:20:46.292 "num_base_bdevs_operational": 3, 00:20:46.292 "base_bdevs_list": [ 00:20:46.292 { 00:20:46.292 "name": "BaseBdev1", 00:20:46.292 "uuid": "9ad94b6d-35a9-45a6-be42-e1caa1f2dd87", 00:20:46.292 "is_configured": true, 00:20:46.292 "data_offset": 2048, 00:20:46.292 "data_size": 63488 00:20:46.292 }, 00:20:46.292 { 00:20:46.292 "name": "BaseBdev2", 00:20:46.292 "uuid": "9dc66ea3-8d7f-4807-b84b-e0496e8a3848", 00:20:46.292 "is_configured": true, 00:20:46.292 "data_offset": 2048, 00:20:46.292 "data_size": 63488 00:20:46.292 }, 00:20:46.292 { 00:20:46.292 "name": "BaseBdev3", 00:20:46.292 "uuid": "b1e663f3-54b2-4dc3-bba1-0956747da133", 00:20:46.292 "is_configured": true, 00:20:46.292 "data_offset": 2048, 00:20:46.292 "data_size": 63488 00:20:46.292 } 00:20:46.292 ] 00:20:46.292 }' 00:20:46.292 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:46.292 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.555 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:46.555 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:46.555 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:20:46.555 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:46.555 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:20:46.555 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:46.555 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:46.555 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:46.555 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.555 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.555 [2024-12-09 23:02:21.729912] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:46.555 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.555 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:46.555 "name": "Existed_Raid", 00:20:46.555 "aliases": [ 00:20:46.555 "06b8a652-dd89-4177-bb95-847ad6d2e9c2" 00:20:46.555 ], 00:20:46.555 "product_name": "Raid Volume", 00:20:46.555 "block_size": 512, 00:20:46.555 "num_blocks": 190464, 00:20:46.555 "uuid": "06b8a652-dd89-4177-bb95-847ad6d2e9c2", 00:20:46.555 "assigned_rate_limits": { 00:20:46.555 "rw_ios_per_sec": 0, 00:20:46.555 "rw_mbytes_per_sec": 0, 00:20:46.555 "r_mbytes_per_sec": 0, 00:20:46.555 "w_mbytes_per_sec": 0 00:20:46.555 }, 00:20:46.555 "claimed": false, 00:20:46.555 "zoned": false, 00:20:46.555 "supported_io_types": { 00:20:46.555 "read": true, 00:20:46.555 "write": true, 00:20:46.555 "unmap": true, 00:20:46.555 "flush": true, 00:20:46.555 "reset": true, 00:20:46.555 "nvme_admin": false, 00:20:46.555 "nvme_io": false, 00:20:46.555 "nvme_io_md": false, 00:20:46.555 
"write_zeroes": true, 00:20:46.555 "zcopy": false, 00:20:46.555 "get_zone_info": false, 00:20:46.555 "zone_management": false, 00:20:46.555 "zone_append": false, 00:20:46.555 "compare": false, 00:20:46.555 "compare_and_write": false, 00:20:46.555 "abort": false, 00:20:46.555 "seek_hole": false, 00:20:46.555 "seek_data": false, 00:20:46.555 "copy": false, 00:20:46.555 "nvme_iov_md": false 00:20:46.555 }, 00:20:46.555 "memory_domains": [ 00:20:46.555 { 00:20:46.555 "dma_device_id": "system", 00:20:46.555 "dma_device_type": 1 00:20:46.555 }, 00:20:46.555 { 00:20:46.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:46.555 "dma_device_type": 2 00:20:46.555 }, 00:20:46.555 { 00:20:46.555 "dma_device_id": "system", 00:20:46.555 "dma_device_type": 1 00:20:46.555 }, 00:20:46.555 { 00:20:46.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:46.555 "dma_device_type": 2 00:20:46.555 }, 00:20:46.555 { 00:20:46.555 "dma_device_id": "system", 00:20:46.555 "dma_device_type": 1 00:20:46.555 }, 00:20:46.555 { 00:20:46.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:46.555 "dma_device_type": 2 00:20:46.555 } 00:20:46.555 ], 00:20:46.555 "driver_specific": { 00:20:46.555 "raid": { 00:20:46.555 "uuid": "06b8a652-dd89-4177-bb95-847ad6d2e9c2", 00:20:46.555 "strip_size_kb": 64, 00:20:46.555 "state": "online", 00:20:46.555 "raid_level": "concat", 00:20:46.555 "superblock": true, 00:20:46.555 "num_base_bdevs": 3, 00:20:46.555 "num_base_bdevs_discovered": 3, 00:20:46.555 "num_base_bdevs_operational": 3, 00:20:46.555 "base_bdevs_list": [ 00:20:46.555 { 00:20:46.555 "name": "BaseBdev1", 00:20:46.555 "uuid": "9ad94b6d-35a9-45a6-be42-e1caa1f2dd87", 00:20:46.555 "is_configured": true, 00:20:46.555 "data_offset": 2048, 00:20:46.556 "data_size": 63488 00:20:46.556 }, 00:20:46.556 { 00:20:46.556 "name": "BaseBdev2", 00:20:46.556 "uuid": "9dc66ea3-8d7f-4807-b84b-e0496e8a3848", 00:20:46.556 "is_configured": true, 00:20:46.556 "data_offset": 2048, 00:20:46.556 "data_size": 63488 00:20:46.556 }, 
00:20:46.556 { 00:20:46.556 "name": "BaseBdev3", 00:20:46.556 "uuid": "b1e663f3-54b2-4dc3-bba1-0956747da133", 00:20:46.556 "is_configured": true, 00:20:46.556 "data_offset": 2048, 00:20:46.556 "data_size": 63488 00:20:46.556 } 00:20:46.556 ] 00:20:46.556 } 00:20:46.556 } 00:20:46.556 }' 00:20:46.556 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:46.556 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:46.556 BaseBdev2 00:20:46.556 BaseBdev3' 00:20:46.556 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:46.556 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:46.556 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:46.556 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:46.556 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.556 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.556 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:46.556 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.556 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:46.556 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:46.556 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:46.556 
23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:46.556 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.556 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.556 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:46.556 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.556 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:46.556 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:46.556 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:46.556 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:46.556 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:46.556 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.556 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.556 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.556 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:46.556 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:46.556 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:46.556 23:02:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.556 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.556 [2024-12-09 23:02:21.913659] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:46.817 [2024-12-09 23:02:21.913863] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:46.817 [2024-12-09 23:02:21.913954] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:46.817 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.817 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:46.817 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:20:46.817 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:46.817 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:20:46.817 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:20:46.817 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:20:46.817 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:46.817 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:20:46.817 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:46.817 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:46.817 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:46.817 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:20:46.817 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:46.817 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:46.817 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:46.817 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.817 23:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:46.817 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.817 23:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.817 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.817 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:46.817 "name": "Existed_Raid", 00:20:46.817 "uuid": "06b8a652-dd89-4177-bb95-847ad6d2e9c2", 00:20:46.818 "strip_size_kb": 64, 00:20:46.818 "state": "offline", 00:20:46.818 "raid_level": "concat", 00:20:46.818 "superblock": true, 00:20:46.818 "num_base_bdevs": 3, 00:20:46.818 "num_base_bdevs_discovered": 2, 00:20:46.818 "num_base_bdevs_operational": 2, 00:20:46.818 "base_bdevs_list": [ 00:20:46.818 { 00:20:46.818 "name": null, 00:20:46.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.818 "is_configured": false, 00:20:46.818 "data_offset": 0, 00:20:46.818 "data_size": 63488 00:20:46.818 }, 00:20:46.818 { 00:20:46.818 "name": "BaseBdev2", 00:20:46.818 "uuid": "9dc66ea3-8d7f-4807-b84b-e0496e8a3848", 00:20:46.818 "is_configured": true, 00:20:46.818 "data_offset": 2048, 00:20:46.818 "data_size": 63488 00:20:46.818 }, 00:20:46.818 { 00:20:46.818 "name": "BaseBdev3", 00:20:46.818 "uuid": "b1e663f3-54b2-4dc3-bba1-0956747da133", 
00:20:46.818 "is_configured": true, 00:20:46.818 "data_offset": 2048, 00:20:46.818 "data_size": 63488 00:20:46.818 } 00:20:46.818 ] 00:20:46.818 }' 00:20:46.818 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:46.818 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.079 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:47.079 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:47.079 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.079 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.079 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:47.079 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.079 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.079 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:47.079 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:47.079 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:47.079 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.079 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.079 [2024-12-09 23:02:22.346537] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:47.079 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.079 23:02:22 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:47.079 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:47.079 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:47.079 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.079 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.079 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.079 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.348 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:47.348 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:47.348 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:20:47.348 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.349 [2024-12-09 23:02:22.457235] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:47.349 [2024-12-09 23:02:22.457457] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.349 BaseBdev2 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:47.349 23:02:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.349 [ 00:20:47.349 { 00:20:47.349 "name": "BaseBdev2", 00:20:47.349 "aliases": [ 00:20:47.349 "ede28e0c-2591-48c3-8415-69e6353b9774" 00:20:47.349 ], 00:20:47.349 "product_name": "Malloc disk", 00:20:47.349 "block_size": 512, 00:20:47.349 "num_blocks": 65536, 00:20:47.349 "uuid": "ede28e0c-2591-48c3-8415-69e6353b9774", 00:20:47.349 "assigned_rate_limits": { 00:20:47.349 "rw_ios_per_sec": 0, 00:20:47.349 "rw_mbytes_per_sec": 0, 00:20:47.349 "r_mbytes_per_sec": 0, 00:20:47.349 "w_mbytes_per_sec": 0 00:20:47.349 }, 00:20:47.349 "claimed": false, 00:20:47.349 "zoned": false, 00:20:47.349 "supported_io_types": { 00:20:47.349 "read": true, 00:20:47.349 "write": true, 00:20:47.349 "unmap": true, 00:20:47.349 "flush": true, 00:20:47.349 "reset": true, 00:20:47.349 "nvme_admin": false, 00:20:47.349 "nvme_io": false, 00:20:47.349 "nvme_io_md": false, 00:20:47.349 "write_zeroes": true, 00:20:47.349 "zcopy": true, 00:20:47.349 "get_zone_info": false, 00:20:47.349 
"zone_management": false, 00:20:47.349 "zone_append": false, 00:20:47.349 "compare": false, 00:20:47.349 "compare_and_write": false, 00:20:47.349 "abort": true, 00:20:47.349 "seek_hole": false, 00:20:47.349 "seek_data": false, 00:20:47.349 "copy": true, 00:20:47.349 "nvme_iov_md": false 00:20:47.349 }, 00:20:47.349 "memory_domains": [ 00:20:47.349 { 00:20:47.349 "dma_device_id": "system", 00:20:47.349 "dma_device_type": 1 00:20:47.349 }, 00:20:47.349 { 00:20:47.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:47.349 "dma_device_type": 2 00:20:47.349 } 00:20:47.349 ], 00:20:47.349 "driver_specific": {} 00:20:47.349 } 00:20:47.349 ] 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.349 BaseBdev3 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.349 [ 00:20:47.349 { 00:20:47.349 "name": "BaseBdev3", 00:20:47.349 "aliases": [ 00:20:47.349 "dc4b0524-9d96-4414-b728-dbcf3df4990d" 00:20:47.349 ], 00:20:47.349 "product_name": "Malloc disk", 00:20:47.349 "block_size": 512, 00:20:47.349 "num_blocks": 65536, 00:20:47.349 "uuid": "dc4b0524-9d96-4414-b728-dbcf3df4990d", 00:20:47.349 "assigned_rate_limits": { 00:20:47.349 "rw_ios_per_sec": 0, 00:20:47.349 "rw_mbytes_per_sec": 0, 00:20:47.349 "r_mbytes_per_sec": 0, 00:20:47.349 "w_mbytes_per_sec": 0 00:20:47.349 }, 00:20:47.349 "claimed": false, 00:20:47.349 "zoned": false, 00:20:47.349 "supported_io_types": { 00:20:47.349 "read": true, 00:20:47.349 "write": true, 00:20:47.349 "unmap": true, 00:20:47.349 "flush": true, 00:20:47.349 "reset": true, 00:20:47.349 "nvme_admin": false, 00:20:47.349 "nvme_io": false, 00:20:47.349 "nvme_io_md": false, 00:20:47.349 "write_zeroes": true, 00:20:47.349 
"zcopy": true, 00:20:47.349 "get_zone_info": false, 00:20:47.349 "zone_management": false, 00:20:47.349 "zone_append": false, 00:20:47.349 "compare": false, 00:20:47.349 "compare_and_write": false, 00:20:47.349 "abort": true, 00:20:47.349 "seek_hole": false, 00:20:47.349 "seek_data": false, 00:20:47.349 "copy": true, 00:20:47.349 "nvme_iov_md": false 00:20:47.349 }, 00:20:47.349 "memory_domains": [ 00:20:47.349 { 00:20:47.349 "dma_device_id": "system", 00:20:47.349 "dma_device_type": 1 00:20:47.349 }, 00:20:47.349 { 00:20:47.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:47.349 "dma_device_type": 2 00:20:47.349 } 00:20:47.349 ], 00:20:47.349 "driver_specific": {} 00:20:47.349 } 00:20:47.349 ] 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.349 [2024-12-09 23:02:22.688925] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:47.349 [2024-12-09 23:02:22.689158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:47.349 [2024-12-09 23:02:22.689256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:47.349 [2024-12-09 23:02:22.691476] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.349 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.611 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.611 23:02:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:47.611 "name": "Existed_Raid", 00:20:47.611 "uuid": "eaada6a2-2daf-4d17-97be-e68f140cd1c2", 00:20:47.611 "strip_size_kb": 64, 00:20:47.611 "state": "configuring", 00:20:47.611 "raid_level": "concat", 00:20:47.611 "superblock": true, 00:20:47.611 "num_base_bdevs": 3, 00:20:47.611 "num_base_bdevs_discovered": 2, 00:20:47.611 "num_base_bdevs_operational": 3, 00:20:47.611 "base_bdevs_list": [ 00:20:47.611 { 00:20:47.611 "name": "BaseBdev1", 00:20:47.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.611 "is_configured": false, 00:20:47.611 "data_offset": 0, 00:20:47.611 "data_size": 0 00:20:47.611 }, 00:20:47.611 { 00:20:47.611 "name": "BaseBdev2", 00:20:47.611 "uuid": "ede28e0c-2591-48c3-8415-69e6353b9774", 00:20:47.611 "is_configured": true, 00:20:47.611 "data_offset": 2048, 00:20:47.611 "data_size": 63488 00:20:47.611 }, 00:20:47.611 { 00:20:47.611 "name": "BaseBdev3", 00:20:47.611 "uuid": "dc4b0524-9d96-4414-b728-dbcf3df4990d", 00:20:47.611 "is_configured": true, 00:20:47.611 "data_offset": 2048, 00:20:47.611 "data_size": 63488 00:20:47.611 } 00:20:47.611 ] 00:20:47.611 }' 00:20:47.611 23:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:47.611 23:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.872 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:47.872 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.872 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.872 [2024-12-09 23:02:23.009028] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:47.872 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.872 23:02:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:47.872 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:47.872 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:47.872 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:47.872 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:47.872 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:47.872 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:47.872 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:47.872 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:47.872 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:47.872 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.872 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.872 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.872 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:47.872 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.872 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:47.872 "name": "Existed_Raid", 00:20:47.872 "uuid": "eaada6a2-2daf-4d17-97be-e68f140cd1c2", 00:20:47.872 "strip_size_kb": 64, 
00:20:47.872 "state": "configuring", 00:20:47.872 "raid_level": "concat", 00:20:47.872 "superblock": true, 00:20:47.872 "num_base_bdevs": 3, 00:20:47.872 "num_base_bdevs_discovered": 1, 00:20:47.872 "num_base_bdevs_operational": 3, 00:20:47.872 "base_bdevs_list": [ 00:20:47.872 { 00:20:47.872 "name": "BaseBdev1", 00:20:47.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.872 "is_configured": false, 00:20:47.872 "data_offset": 0, 00:20:47.872 "data_size": 0 00:20:47.872 }, 00:20:47.872 { 00:20:47.872 "name": null, 00:20:47.872 "uuid": "ede28e0c-2591-48c3-8415-69e6353b9774", 00:20:47.872 "is_configured": false, 00:20:47.872 "data_offset": 0, 00:20:47.872 "data_size": 63488 00:20:47.872 }, 00:20:47.872 { 00:20:47.872 "name": "BaseBdev3", 00:20:47.872 "uuid": "dc4b0524-9d96-4414-b728-dbcf3df4990d", 00:20:47.872 "is_configured": true, 00:20:47.872 "data_offset": 2048, 00:20:47.872 "data_size": 63488 00:20:47.872 } 00:20:47.872 ] 00:20:47.872 }' 00:20:47.872 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:47.872 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.134 [2024-12-09 23:02:23.385744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:48.134 BaseBdev1 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.134 
[ 00:20:48.134 { 00:20:48.134 "name": "BaseBdev1", 00:20:48.134 "aliases": [ 00:20:48.134 "0ba4f7ee-561d-470e-8006-107359c943ec" 00:20:48.134 ], 00:20:48.134 "product_name": "Malloc disk", 00:20:48.134 "block_size": 512, 00:20:48.134 "num_blocks": 65536, 00:20:48.134 "uuid": "0ba4f7ee-561d-470e-8006-107359c943ec", 00:20:48.134 "assigned_rate_limits": { 00:20:48.134 "rw_ios_per_sec": 0, 00:20:48.134 "rw_mbytes_per_sec": 0, 00:20:48.134 "r_mbytes_per_sec": 0, 00:20:48.134 "w_mbytes_per_sec": 0 00:20:48.134 }, 00:20:48.134 "claimed": true, 00:20:48.134 "claim_type": "exclusive_write", 00:20:48.134 "zoned": false, 00:20:48.134 "supported_io_types": { 00:20:48.134 "read": true, 00:20:48.134 "write": true, 00:20:48.134 "unmap": true, 00:20:48.134 "flush": true, 00:20:48.134 "reset": true, 00:20:48.134 "nvme_admin": false, 00:20:48.134 "nvme_io": false, 00:20:48.134 "nvme_io_md": false, 00:20:48.134 "write_zeroes": true, 00:20:48.134 "zcopy": true, 00:20:48.134 "get_zone_info": false, 00:20:48.134 "zone_management": false, 00:20:48.134 "zone_append": false, 00:20:48.134 "compare": false, 00:20:48.134 "compare_and_write": false, 00:20:48.134 "abort": true, 00:20:48.134 "seek_hole": false, 00:20:48.134 "seek_data": false, 00:20:48.134 "copy": true, 00:20:48.134 "nvme_iov_md": false 00:20:48.134 }, 00:20:48.134 "memory_domains": [ 00:20:48.134 { 00:20:48.134 "dma_device_id": "system", 00:20:48.134 "dma_device_type": 1 00:20:48.134 }, 00:20:48.134 { 00:20:48.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:48.134 "dma_device_type": 2 00:20:48.134 } 00:20:48.134 ], 00:20:48.134 "driver_specific": {} 00:20:48.134 } 00:20:48.134 ] 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:48.134 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.135 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.135 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:48.135 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.135 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.135 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:48.135 "name": "Existed_Raid", 00:20:48.135 "uuid": "eaada6a2-2daf-4d17-97be-e68f140cd1c2", 00:20:48.135 "strip_size_kb": 64, 00:20:48.135 "state": "configuring", 00:20:48.135 "raid_level": "concat", 00:20:48.135 "superblock": true, 
00:20:48.135 "num_base_bdevs": 3, 00:20:48.135 "num_base_bdevs_discovered": 2, 00:20:48.135 "num_base_bdevs_operational": 3, 00:20:48.135 "base_bdevs_list": [ 00:20:48.135 { 00:20:48.135 "name": "BaseBdev1", 00:20:48.135 "uuid": "0ba4f7ee-561d-470e-8006-107359c943ec", 00:20:48.135 "is_configured": true, 00:20:48.135 "data_offset": 2048, 00:20:48.135 "data_size": 63488 00:20:48.135 }, 00:20:48.135 { 00:20:48.135 "name": null, 00:20:48.135 "uuid": "ede28e0c-2591-48c3-8415-69e6353b9774", 00:20:48.135 "is_configured": false, 00:20:48.135 "data_offset": 0, 00:20:48.135 "data_size": 63488 00:20:48.135 }, 00:20:48.135 { 00:20:48.135 "name": "BaseBdev3", 00:20:48.135 "uuid": "dc4b0524-9d96-4414-b728-dbcf3df4990d", 00:20:48.135 "is_configured": true, 00:20:48.135 "data_offset": 2048, 00:20:48.135 "data_size": 63488 00:20:48.135 } 00:20:48.135 ] 00:20:48.135 }' 00:20:48.135 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:48.135 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.396 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.396 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:48.396 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.396 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.657 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.657 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:20:48.657 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:20:48.657 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:20:48.657 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.657 [2024-12-09 23:02:23.781925] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:48.657 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.657 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:48.657 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:48.657 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:48.657 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:48.657 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:48.657 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:48.657 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:48.657 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:48.657 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:48.657 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:48.657 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.657 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.657 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:48.657 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:20:48.657 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.657 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:48.657 "name": "Existed_Raid", 00:20:48.657 "uuid": "eaada6a2-2daf-4d17-97be-e68f140cd1c2", 00:20:48.657 "strip_size_kb": 64, 00:20:48.657 "state": "configuring", 00:20:48.657 "raid_level": "concat", 00:20:48.657 "superblock": true, 00:20:48.657 "num_base_bdevs": 3, 00:20:48.657 "num_base_bdevs_discovered": 1, 00:20:48.657 "num_base_bdevs_operational": 3, 00:20:48.657 "base_bdevs_list": [ 00:20:48.657 { 00:20:48.657 "name": "BaseBdev1", 00:20:48.657 "uuid": "0ba4f7ee-561d-470e-8006-107359c943ec", 00:20:48.657 "is_configured": true, 00:20:48.657 "data_offset": 2048, 00:20:48.657 "data_size": 63488 00:20:48.657 }, 00:20:48.657 { 00:20:48.657 "name": null, 00:20:48.657 "uuid": "ede28e0c-2591-48c3-8415-69e6353b9774", 00:20:48.657 "is_configured": false, 00:20:48.657 "data_offset": 0, 00:20:48.657 "data_size": 63488 00:20:48.657 }, 00:20:48.657 { 00:20:48.657 "name": null, 00:20:48.657 "uuid": "dc4b0524-9d96-4414-b728-dbcf3df4990d", 00:20:48.657 "is_configured": false, 00:20:48.657 "data_offset": 0, 00:20:48.657 "data_size": 63488 00:20:48.657 } 00:20:48.657 ] 00:20:48.657 }' 00:20:48.657 23:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:48.657 23:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.918 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.918 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:48.918 23:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.918 23:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:20:48.918 23:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.918 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:20:48.918 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:48.918 23:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.918 23:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.918 [2024-12-09 23:02:24.146059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:48.918 23:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.918 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:48.918 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:48.918 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:48.918 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:48.918 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:48.918 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:48.918 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:48.918 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:48.919 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:48.919 23:02:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:20:48.919 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:48.919 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.919 23:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.919 23:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.919 23:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.919 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:48.919 "name": "Existed_Raid", 00:20:48.919 "uuid": "eaada6a2-2daf-4d17-97be-e68f140cd1c2", 00:20:48.919 "strip_size_kb": 64, 00:20:48.919 "state": "configuring", 00:20:48.919 "raid_level": "concat", 00:20:48.919 "superblock": true, 00:20:48.919 "num_base_bdevs": 3, 00:20:48.919 "num_base_bdevs_discovered": 2, 00:20:48.919 "num_base_bdevs_operational": 3, 00:20:48.919 "base_bdevs_list": [ 00:20:48.919 { 00:20:48.919 "name": "BaseBdev1", 00:20:48.919 "uuid": "0ba4f7ee-561d-470e-8006-107359c943ec", 00:20:48.919 "is_configured": true, 00:20:48.919 "data_offset": 2048, 00:20:48.919 "data_size": 63488 00:20:48.919 }, 00:20:48.919 { 00:20:48.919 "name": null, 00:20:48.919 "uuid": "ede28e0c-2591-48c3-8415-69e6353b9774", 00:20:48.919 "is_configured": false, 00:20:48.919 "data_offset": 0, 00:20:48.919 "data_size": 63488 00:20:48.919 }, 00:20:48.919 { 00:20:48.919 "name": "BaseBdev3", 00:20:48.919 "uuid": "dc4b0524-9d96-4414-b728-dbcf3df4990d", 00:20:48.919 "is_configured": true, 00:20:48.919 "data_offset": 2048, 00:20:48.919 "data_size": 63488 00:20:48.919 } 00:20:48.919 ] 00:20:48.919 }' 00:20:48.919 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:48.919 23:02:24 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:20:49.178 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.178 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:49.178 23:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.178 23:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.178 23:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.178 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:20:49.178 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:49.178 23:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.178 23:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.438 [2024-12-09 23:02:24.542190] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:49.438 23:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.438 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:49.438 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:49.438 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:49.438 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:49.438 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:49.438 23:02:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:49.438 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:49.438 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:49.438 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:49.438 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:49.438 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.438 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:49.438 23:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.438 23:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.438 23:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.438 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:49.438 "name": "Existed_Raid", 00:20:49.438 "uuid": "eaada6a2-2daf-4d17-97be-e68f140cd1c2", 00:20:49.438 "strip_size_kb": 64, 00:20:49.438 "state": "configuring", 00:20:49.438 "raid_level": "concat", 00:20:49.438 "superblock": true, 00:20:49.438 "num_base_bdevs": 3, 00:20:49.438 "num_base_bdevs_discovered": 1, 00:20:49.438 "num_base_bdevs_operational": 3, 00:20:49.438 "base_bdevs_list": [ 00:20:49.438 { 00:20:49.438 "name": null, 00:20:49.438 "uuid": "0ba4f7ee-561d-470e-8006-107359c943ec", 00:20:49.438 "is_configured": false, 00:20:49.438 "data_offset": 0, 00:20:49.438 "data_size": 63488 00:20:49.438 }, 00:20:49.438 { 00:20:49.438 "name": null, 00:20:49.438 "uuid": "ede28e0c-2591-48c3-8415-69e6353b9774", 00:20:49.438 "is_configured": false, 00:20:49.438 "data_offset": 0, 
00:20:49.438 "data_size": 63488 00:20:49.438 }, 00:20:49.438 { 00:20:49.438 "name": "BaseBdev3", 00:20:49.438 "uuid": "dc4b0524-9d96-4414-b728-dbcf3df4990d", 00:20:49.438 "is_configured": true, 00:20:49.438 "data_offset": 2048, 00:20:49.438 "data_size": 63488 00:20:49.438 } 00:20:49.438 ] 00:20:49.438 }' 00:20:49.438 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:49.438 23:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.700 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.700 23:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.700 23:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.700 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:49.700 23:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.700 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:20:49.700 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:49.700 23:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.700 23:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.700 [2024-12-09 23:02:24.978355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:49.700 23:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.700 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:49.700 23:02:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:49.700 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:49.700 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:49.700 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:49.700 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:49.700 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:49.700 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:49.700 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:49.700 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:49.700 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.700 23:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.700 23:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.700 23:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:49.700 23:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.700 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:49.700 "name": "Existed_Raid", 00:20:49.700 "uuid": "eaada6a2-2daf-4d17-97be-e68f140cd1c2", 00:20:49.700 "strip_size_kb": 64, 00:20:49.700 "state": "configuring", 00:20:49.700 "raid_level": "concat", 00:20:49.700 "superblock": true, 00:20:49.700 "num_base_bdevs": 3, 00:20:49.700 
"num_base_bdevs_discovered": 2, 00:20:49.700 "num_base_bdevs_operational": 3, 00:20:49.700 "base_bdevs_list": [ 00:20:49.700 { 00:20:49.700 "name": null, 00:20:49.700 "uuid": "0ba4f7ee-561d-470e-8006-107359c943ec", 00:20:49.700 "is_configured": false, 00:20:49.700 "data_offset": 0, 00:20:49.700 "data_size": 63488 00:20:49.700 }, 00:20:49.700 { 00:20:49.700 "name": "BaseBdev2", 00:20:49.700 "uuid": "ede28e0c-2591-48c3-8415-69e6353b9774", 00:20:49.700 "is_configured": true, 00:20:49.700 "data_offset": 2048, 00:20:49.700 "data_size": 63488 00:20:49.700 }, 00:20:49.700 { 00:20:49.700 "name": "BaseBdev3", 00:20:49.700 "uuid": "dc4b0524-9d96-4414-b728-dbcf3df4990d", 00:20:49.700 "is_configured": true, 00:20:49.700 "data_offset": 2048, 00:20:49.700 "data_size": 63488 00:20:49.700 } 00:20:49.700 ] 00:20:49.700 }' 00:20:49.700 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:49.700 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.961 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.961 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.961 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.961 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:50.222 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.222 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:50.222 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.222 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:50.222 23:02:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.222 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.222 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.222 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0ba4f7ee-561d-470e-8006-107359c943ec 00:20:50.222 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.222 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.222 [2024-12-09 23:02:25.418548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:50.222 NewBaseBdev 00:20:50.222 [2024-12-09 23:02:25.419055] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:50.222 [2024-12-09 23:02:25.419087] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:50.222 [2024-12-09 23:02:25.419406] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:50.222 [2024-12-09 23:02:25.419560] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:50.222 [2024-12-09 23:02:25.419569] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:50.222 [2024-12-09 23:02:25.419732] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:50.222 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.222 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:20:50.222 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:20:50.222 
23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:50.222 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:50.222 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:50.222 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:50.222 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:50.222 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.222 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.222 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.222 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:50.222 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.222 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.222 [ 00:20:50.222 { 00:20:50.222 "name": "NewBaseBdev", 00:20:50.222 "aliases": [ 00:20:50.222 "0ba4f7ee-561d-470e-8006-107359c943ec" 00:20:50.222 ], 00:20:50.222 "product_name": "Malloc disk", 00:20:50.222 "block_size": 512, 00:20:50.222 "num_blocks": 65536, 00:20:50.222 "uuid": "0ba4f7ee-561d-470e-8006-107359c943ec", 00:20:50.222 "assigned_rate_limits": { 00:20:50.222 "rw_ios_per_sec": 0, 00:20:50.222 "rw_mbytes_per_sec": 0, 00:20:50.222 "r_mbytes_per_sec": 0, 00:20:50.222 "w_mbytes_per_sec": 0 00:20:50.222 }, 00:20:50.222 "claimed": true, 00:20:50.222 "claim_type": "exclusive_write", 00:20:50.222 "zoned": false, 00:20:50.222 "supported_io_types": { 00:20:50.222 "read": true, 00:20:50.223 "write": true, 00:20:50.223 
"unmap": true, 00:20:50.223 "flush": true, 00:20:50.223 "reset": true, 00:20:50.223 "nvme_admin": false, 00:20:50.223 "nvme_io": false, 00:20:50.223 "nvme_io_md": false, 00:20:50.223 "write_zeroes": true, 00:20:50.223 "zcopy": true, 00:20:50.223 "get_zone_info": false, 00:20:50.223 "zone_management": false, 00:20:50.223 "zone_append": false, 00:20:50.223 "compare": false, 00:20:50.223 "compare_and_write": false, 00:20:50.223 "abort": true, 00:20:50.223 "seek_hole": false, 00:20:50.223 "seek_data": false, 00:20:50.223 "copy": true, 00:20:50.223 "nvme_iov_md": false 00:20:50.223 }, 00:20:50.223 "memory_domains": [ 00:20:50.223 { 00:20:50.223 "dma_device_id": "system", 00:20:50.223 "dma_device_type": 1 00:20:50.223 }, 00:20:50.223 { 00:20:50.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:50.223 "dma_device_type": 2 00:20:50.223 } 00:20:50.223 ], 00:20:50.223 "driver_specific": {} 00:20:50.223 } 00:20:50.223 ] 00:20:50.223 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.223 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:50.223 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:20:50.223 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:50.223 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:50.223 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:50.223 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:50.223 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:50.223 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:20:50.223 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:50.223 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:50.223 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:50.223 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.223 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.223 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.223 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:50.223 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.223 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:50.223 "name": "Existed_Raid", 00:20:50.223 "uuid": "eaada6a2-2daf-4d17-97be-e68f140cd1c2", 00:20:50.223 "strip_size_kb": 64, 00:20:50.223 "state": "online", 00:20:50.223 "raid_level": "concat", 00:20:50.223 "superblock": true, 00:20:50.223 "num_base_bdevs": 3, 00:20:50.223 "num_base_bdevs_discovered": 3, 00:20:50.223 "num_base_bdevs_operational": 3, 00:20:50.223 "base_bdevs_list": [ 00:20:50.223 { 00:20:50.223 "name": "NewBaseBdev", 00:20:50.223 "uuid": "0ba4f7ee-561d-470e-8006-107359c943ec", 00:20:50.223 "is_configured": true, 00:20:50.223 "data_offset": 2048, 00:20:50.223 "data_size": 63488 00:20:50.223 }, 00:20:50.223 { 00:20:50.223 "name": "BaseBdev2", 00:20:50.223 "uuid": "ede28e0c-2591-48c3-8415-69e6353b9774", 00:20:50.223 "is_configured": true, 00:20:50.223 "data_offset": 2048, 00:20:50.223 "data_size": 63488 00:20:50.223 }, 00:20:50.223 { 00:20:50.223 "name": "BaseBdev3", 00:20:50.223 "uuid": "dc4b0524-9d96-4414-b728-dbcf3df4990d", 
00:20:50.223 "is_configured": true, 00:20:50.223 "data_offset": 2048, 00:20:50.223 "data_size": 63488 00:20:50.223 } 00:20:50.223 ] 00:20:50.223 }' 00:20:50.223 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:50.223 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.485 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:20:50.485 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:50.485 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:50.485 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:50.485 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:20:50.485 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:50.485 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:50.485 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.485 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.485 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:50.485 [2024-12-09 23:02:25.771053] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:50.485 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.485 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:50.485 "name": "Existed_Raid", 00:20:50.485 "aliases": [ 00:20:50.485 "eaada6a2-2daf-4d17-97be-e68f140cd1c2" 00:20:50.485 ], 00:20:50.485 
"product_name": "Raid Volume", 00:20:50.485 "block_size": 512, 00:20:50.485 "num_blocks": 190464, 00:20:50.485 "uuid": "eaada6a2-2daf-4d17-97be-e68f140cd1c2", 00:20:50.485 "assigned_rate_limits": { 00:20:50.485 "rw_ios_per_sec": 0, 00:20:50.485 "rw_mbytes_per_sec": 0, 00:20:50.485 "r_mbytes_per_sec": 0, 00:20:50.485 "w_mbytes_per_sec": 0 00:20:50.485 }, 00:20:50.485 "claimed": false, 00:20:50.485 "zoned": false, 00:20:50.485 "supported_io_types": { 00:20:50.485 "read": true, 00:20:50.485 "write": true, 00:20:50.485 "unmap": true, 00:20:50.485 "flush": true, 00:20:50.485 "reset": true, 00:20:50.485 "nvme_admin": false, 00:20:50.485 "nvme_io": false, 00:20:50.485 "nvme_io_md": false, 00:20:50.485 "write_zeroes": true, 00:20:50.485 "zcopy": false, 00:20:50.485 "get_zone_info": false, 00:20:50.485 "zone_management": false, 00:20:50.485 "zone_append": false, 00:20:50.485 "compare": false, 00:20:50.485 "compare_and_write": false, 00:20:50.485 "abort": false, 00:20:50.485 "seek_hole": false, 00:20:50.485 "seek_data": false, 00:20:50.485 "copy": false, 00:20:50.485 "nvme_iov_md": false 00:20:50.485 }, 00:20:50.485 "memory_domains": [ 00:20:50.485 { 00:20:50.485 "dma_device_id": "system", 00:20:50.485 "dma_device_type": 1 00:20:50.485 }, 00:20:50.485 { 00:20:50.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:50.485 "dma_device_type": 2 00:20:50.485 }, 00:20:50.485 { 00:20:50.485 "dma_device_id": "system", 00:20:50.485 "dma_device_type": 1 00:20:50.485 }, 00:20:50.485 { 00:20:50.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:50.485 "dma_device_type": 2 00:20:50.485 }, 00:20:50.485 { 00:20:50.485 "dma_device_id": "system", 00:20:50.485 "dma_device_type": 1 00:20:50.485 }, 00:20:50.485 { 00:20:50.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:50.485 "dma_device_type": 2 00:20:50.485 } 00:20:50.485 ], 00:20:50.485 "driver_specific": { 00:20:50.485 "raid": { 00:20:50.485 "uuid": "eaada6a2-2daf-4d17-97be-e68f140cd1c2", 00:20:50.485 "strip_size_kb": 64, 00:20:50.485 
"state": "online", 00:20:50.485 "raid_level": "concat", 00:20:50.485 "superblock": true, 00:20:50.485 "num_base_bdevs": 3, 00:20:50.485 "num_base_bdevs_discovered": 3, 00:20:50.485 "num_base_bdevs_operational": 3, 00:20:50.485 "base_bdevs_list": [ 00:20:50.485 { 00:20:50.485 "name": "NewBaseBdev", 00:20:50.485 "uuid": "0ba4f7ee-561d-470e-8006-107359c943ec", 00:20:50.485 "is_configured": true, 00:20:50.485 "data_offset": 2048, 00:20:50.485 "data_size": 63488 00:20:50.485 }, 00:20:50.485 { 00:20:50.485 "name": "BaseBdev2", 00:20:50.485 "uuid": "ede28e0c-2591-48c3-8415-69e6353b9774", 00:20:50.485 "is_configured": true, 00:20:50.485 "data_offset": 2048, 00:20:50.485 "data_size": 63488 00:20:50.485 }, 00:20:50.485 { 00:20:50.485 "name": "BaseBdev3", 00:20:50.485 "uuid": "dc4b0524-9d96-4414-b728-dbcf3df4990d", 00:20:50.485 "is_configured": true, 00:20:50.485 "data_offset": 2048, 00:20:50.485 "data_size": 63488 00:20:50.485 } 00:20:50.485 ] 00:20:50.485 } 00:20:50.485 } 00:20:50.485 }' 00:20:50.485 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:50.747 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:20:50.747 BaseBdev2 00:20:50.747 BaseBdev3' 00:20:50.747 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:50.747 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:50.747 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:50.747 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:20:50.747 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.747 23:02:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:50.747 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.747 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.747 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:50.747 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:50.747 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:50.747 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:50.747 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:50.747 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.747 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.747 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.747 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:50.747 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:50.748 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:50.748 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:50.748 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.748 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:50.748 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.748 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.748 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:50.748 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:50.748 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:50.748 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.748 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.748 [2024-12-09 23:02:25.978742] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:50.748 [2024-12-09 23:02:25.978931] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:50.748 [2024-12-09 23:02:25.979171] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:50.748 [2024-12-09 23:02:25.979311] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:50.748 [2024-12-09 23:02:25.979333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:20:50.748 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.748 23:02:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64637 00:20:50.748 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64637 ']' 00:20:50.748 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64637 00:20:50.748 
23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:20:50.748 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:50.748 23:02:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64637 00:20:50.748 killing process with pid 64637 00:20:50.748 23:02:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:50.748 23:02:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:50.748 23:02:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64637' 00:20:50.748 23:02:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64637 00:20:50.748 [2024-12-09 23:02:26.009511] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:50.748 23:02:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64637 00:20:51.007 [2024-12-09 23:02:26.226336] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:51.959 23:02:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:20:51.959 00:20:51.959 real 0m8.207s 00:20:51.959 user 0m12.683s 00:20:51.959 sys 0m1.533s 00:20:51.959 23:02:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:51.959 23:02:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.959 ************************************ 00:20:51.959 END TEST raid_state_function_test_sb 00:20:51.959 ************************************ 00:20:51.959 23:02:27 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:20:51.959 23:02:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:51.959 23:02:27 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:20:51.959 23:02:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:51.959 ************************************ 00:20:51.959 START TEST raid_superblock_test 00:20:51.959 ************************************ 00:20:51.959 23:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:20:51.959 23:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:20:51.959 23:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:20:51.959 23:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:51.959 23:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:51.959 23:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:51.959 23:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:51.959 23:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:51.959 23:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:51.959 23:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:51.959 23:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:51.959 23:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:51.959 23:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:51.959 23:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:51.959 23:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:20:51.959 23:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:20:51.959 23:02:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:20:51.959 23:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65235 00:20:51.959 23:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65235 00:20:51.959 23:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:51.959 23:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65235 ']' 00:20:51.959 23:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.959 23:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:51.959 23:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.959 23:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:51.959 23:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.959 [2024-12-09 23:02:27.223452] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:20:51.959 [2024-12-09 23:02:27.223882] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65235 ] 00:20:52.238 [2024-12-09 23:02:27.386525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.500 [2024-12-09 23:02:27.534467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.500 [2024-12-09 23:02:27.703581] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:52.500 [2024-12-09 23:02:27.703637] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:52.762 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:52.762 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:20:52.762 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:52.762 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:52.762 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:52.762 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:52.762 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:52.762 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:52.762 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:52.762 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:52.762 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:20:52.762 
23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.762 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.023 malloc1 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.023 [2024-12-09 23:02:28.152250] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:53.023 [2024-12-09 23:02:28.152507] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:53.023 [2024-12-09 23:02:28.152566] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:53.023 [2024-12-09 23:02:28.152653] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:53.023 [2024-12-09 23:02:28.155275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:53.023 [2024-12-09 23:02:28.155462] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:53.023 pt1 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.023 malloc2 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.023 [2024-12-09 23:02:28.198394] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:53.023 [2024-12-09 23:02:28.198627] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:53.023 [2024-12-09 23:02:28.198685] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:53.023 [2024-12-09 23:02:28.199336] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:53.023 [2024-12-09 23:02:28.201956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:53.023 [2024-12-09 23:02:28.202014] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:53.023 
pt2 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.023 malloc3 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.023 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.023 [2024-12-09 23:02:28.249744] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:53.023 [2024-12-09 23:02:28.249825] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:53.023 [2024-12-09 23:02:28.249852] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:53.023 [2024-12-09 23:02:28.249863] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:53.023 [2024-12-09 23:02:28.252408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:53.023 pt3 00:20:53.023 [2024-12-09 23:02:28.252613] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:53.024 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.024 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:53.024 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:53.024 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:20:53.024 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.024 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.024 [2024-12-09 23:02:28.257800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:53.024 [2024-12-09 23:02:28.260048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:53.024 [2024-12-09 23:02:28.260316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:53.024 [2024-12-09 23:02:28.260511] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:53.024 [2024-12-09 23:02:28.260526] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:53.024 [2024-12-09 23:02:28.260858] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:20:53.024 [2024-12-09 23:02:28.261073] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:53.024 [2024-12-09 23:02:28.261084] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:53.024 [2024-12-09 23:02:28.261274] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:53.024 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.024 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:20:53.024 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:53.024 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:53.024 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:53.024 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:53.024 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:53.024 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:53.024 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:53.024 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:53.024 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:53.024 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.024 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.024 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.024 23:02:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.024 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.024 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:53.024 "name": "raid_bdev1", 00:20:53.024 "uuid": "3a706014-e2f7-405d-b939-590aed4aaab7", 00:20:53.024 "strip_size_kb": 64, 00:20:53.024 "state": "online", 00:20:53.024 "raid_level": "concat", 00:20:53.024 "superblock": true, 00:20:53.024 "num_base_bdevs": 3, 00:20:53.024 "num_base_bdevs_discovered": 3, 00:20:53.024 "num_base_bdevs_operational": 3, 00:20:53.024 "base_bdevs_list": [ 00:20:53.024 { 00:20:53.024 "name": "pt1", 00:20:53.024 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:53.024 "is_configured": true, 00:20:53.024 "data_offset": 2048, 00:20:53.024 "data_size": 63488 00:20:53.024 }, 00:20:53.024 { 00:20:53.024 "name": "pt2", 00:20:53.024 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:53.024 "is_configured": true, 00:20:53.024 "data_offset": 2048, 00:20:53.024 "data_size": 63488 00:20:53.024 }, 00:20:53.024 { 00:20:53.024 "name": "pt3", 00:20:53.024 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:53.024 "is_configured": true, 00:20:53.024 "data_offset": 2048, 00:20:53.024 "data_size": 63488 00:20:53.024 } 00:20:53.024 ] 00:20:53.024 }' 00:20:53.024 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:53.024 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.284 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:53.284 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:53.284 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:53.284 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:20:53.284 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:53.284 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:53.284 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:53.284 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:53.284 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.284 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.284 [2024-12-09 23:02:28.634249] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:53.578 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.578 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:53.578 "name": "raid_bdev1", 00:20:53.578 "aliases": [ 00:20:53.578 "3a706014-e2f7-405d-b939-590aed4aaab7" 00:20:53.578 ], 00:20:53.578 "product_name": "Raid Volume", 00:20:53.578 "block_size": 512, 00:20:53.578 "num_blocks": 190464, 00:20:53.578 "uuid": "3a706014-e2f7-405d-b939-590aed4aaab7", 00:20:53.578 "assigned_rate_limits": { 00:20:53.578 "rw_ios_per_sec": 0, 00:20:53.578 "rw_mbytes_per_sec": 0, 00:20:53.578 "r_mbytes_per_sec": 0, 00:20:53.578 "w_mbytes_per_sec": 0 00:20:53.578 }, 00:20:53.578 "claimed": false, 00:20:53.578 "zoned": false, 00:20:53.578 "supported_io_types": { 00:20:53.578 "read": true, 00:20:53.578 "write": true, 00:20:53.578 "unmap": true, 00:20:53.578 "flush": true, 00:20:53.578 "reset": true, 00:20:53.578 "nvme_admin": false, 00:20:53.578 "nvme_io": false, 00:20:53.578 "nvme_io_md": false, 00:20:53.578 "write_zeroes": true, 00:20:53.578 "zcopy": false, 00:20:53.578 "get_zone_info": false, 00:20:53.578 "zone_management": false, 00:20:53.578 "zone_append": false, 00:20:53.578 "compare": 
false, 00:20:53.578 "compare_and_write": false, 00:20:53.578 "abort": false, 00:20:53.578 "seek_hole": false, 00:20:53.578 "seek_data": false, 00:20:53.578 "copy": false, 00:20:53.578 "nvme_iov_md": false 00:20:53.578 }, 00:20:53.578 "memory_domains": [ 00:20:53.578 { 00:20:53.578 "dma_device_id": "system", 00:20:53.578 "dma_device_type": 1 00:20:53.578 }, 00:20:53.578 { 00:20:53.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:53.578 "dma_device_type": 2 00:20:53.578 }, 00:20:53.578 { 00:20:53.578 "dma_device_id": "system", 00:20:53.578 "dma_device_type": 1 00:20:53.578 }, 00:20:53.578 { 00:20:53.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:53.578 "dma_device_type": 2 00:20:53.578 }, 00:20:53.578 { 00:20:53.578 "dma_device_id": "system", 00:20:53.578 "dma_device_type": 1 00:20:53.578 }, 00:20:53.578 { 00:20:53.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:53.578 "dma_device_type": 2 00:20:53.578 } 00:20:53.578 ], 00:20:53.578 "driver_specific": { 00:20:53.578 "raid": { 00:20:53.579 "uuid": "3a706014-e2f7-405d-b939-590aed4aaab7", 00:20:53.579 "strip_size_kb": 64, 00:20:53.579 "state": "online", 00:20:53.579 "raid_level": "concat", 00:20:53.579 "superblock": true, 00:20:53.579 "num_base_bdevs": 3, 00:20:53.579 "num_base_bdevs_discovered": 3, 00:20:53.579 "num_base_bdevs_operational": 3, 00:20:53.579 "base_bdevs_list": [ 00:20:53.579 { 00:20:53.579 "name": "pt1", 00:20:53.579 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:53.579 "is_configured": true, 00:20:53.579 "data_offset": 2048, 00:20:53.579 "data_size": 63488 00:20:53.579 }, 00:20:53.579 { 00:20:53.579 "name": "pt2", 00:20:53.579 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:53.579 "is_configured": true, 00:20:53.579 "data_offset": 2048, 00:20:53.579 "data_size": 63488 00:20:53.579 }, 00:20:53.579 { 00:20:53.579 "name": "pt3", 00:20:53.579 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:53.579 "is_configured": true, 00:20:53.579 "data_offset": 2048, 00:20:53.579 
"data_size": 63488 00:20:53.579 } 00:20:53.579 ] 00:20:53.579 } 00:20:53.579 } 00:20:53.579 }' 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:53.579 pt2 00:20:53.579 pt3' 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:53.579 23:02:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.579 [2024-12-09 23:02:28.838241] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:53.579 23:02:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3a706014-e2f7-405d-b939-590aed4aaab7 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3a706014-e2f7-405d-b939-590aed4aaab7 ']' 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.579 [2024-12-09 23:02:28.869892] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:53.579 [2024-12-09 23:02:28.870059] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:53.579 [2024-12-09 23:02:28.870240] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:53.579 [2024-12-09 23:02:28.870512] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:53.579 [2024-12-09 23:02:28.870618] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.579 23:02:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.579 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.845 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.845 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:53.845 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:20:53.845 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.845 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.845 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.845 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:53.845 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:20:53.845 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.845 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.845 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.845 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:53.845 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:53.845 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:20:53.845 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:53.845 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:53.845 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:53.845 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:53.845 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:53.845 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:53.845 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.845 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.845 [2024-12-09 23:02:28.981989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:53.845 [2024-12-09 23:02:28.984436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc2 is claimed 00:20:53.845 [2024-12-09 23:02:28.984509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:53.845 [2024-12-09 23:02:28.984575] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:53.845 [2024-12-09 23:02:28.984652] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:53.845 [2024-12-09 23:02:28.984672] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:20:53.845 [2024-12-09 23:02:28.984693] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:53.845 [2024-12-09 23:02:28.984704] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:53.845 request: 00:20:53.845 { 00:20:53.845 "name": "raid_bdev1", 00:20:53.845 "raid_level": "concat", 00:20:53.845 "base_bdevs": [ 00:20:53.845 "malloc1", 00:20:53.845 "malloc2", 00:20:53.845 "malloc3" 00:20:53.845 ], 00:20:53.845 "strip_size_kb": 64, 00:20:53.845 "superblock": false, 00:20:53.845 "method": "bdev_raid_create", 00:20:53.845 "req_id": 1 00:20:53.845 } 00:20:53.845 Got JSON-RPC error response 00:20:53.845 response: 00:20:53.845 { 00:20:53.845 "code": -17, 00:20:53.845 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:53.845 } 00:20:53.845 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:53.845 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:20:53.845 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:53.845 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:53.845 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # 
(( !es == 0 )) 00:20:53.845 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.845 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.845 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.845 23:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:53.845 23:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.845 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:53.845 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:53.845 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:53.845 23:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.845 23:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.845 [2024-12-09 23:02:29.025937] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:53.845 [2024-12-09 23:02:29.026171] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:53.845 [2024-12-09 23:02:29.026224] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:53.845 [2024-12-09 23:02:29.026288] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:53.845 [2024-12-09 23:02:29.029149] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:53.845 [2024-12-09 23:02:29.029317] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:53.845 [2024-12-09 23:02:29.029499] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:53.845 [2024-12-09 23:02:29.029590] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:53.845 pt1 00:20:53.845 23:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.845 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:20:53.845 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:53.845 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:53.845 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:53.845 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:53.845 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:53.845 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:53.845 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:53.845 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:53.845 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:53.845 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.845 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.845 23:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.845 23:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.845 23:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.845 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:53.845 "name": "raid_bdev1", 
00:20:53.845 "uuid": "3a706014-e2f7-405d-b939-590aed4aaab7", 00:20:53.845 "strip_size_kb": 64, 00:20:53.845 "state": "configuring", 00:20:53.845 "raid_level": "concat", 00:20:53.845 "superblock": true, 00:20:53.845 "num_base_bdevs": 3, 00:20:53.845 "num_base_bdevs_discovered": 1, 00:20:53.845 "num_base_bdevs_operational": 3, 00:20:53.845 "base_bdevs_list": [ 00:20:53.845 { 00:20:53.845 "name": "pt1", 00:20:53.845 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:53.845 "is_configured": true, 00:20:53.845 "data_offset": 2048, 00:20:53.845 "data_size": 63488 00:20:53.845 }, 00:20:53.845 { 00:20:53.845 "name": null, 00:20:53.845 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:53.845 "is_configured": false, 00:20:53.845 "data_offset": 2048, 00:20:53.845 "data_size": 63488 00:20:53.845 }, 00:20:53.845 { 00:20:53.845 "name": null, 00:20:53.845 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:53.845 "is_configured": false, 00:20:53.845 "data_offset": 2048, 00:20:53.845 "data_size": 63488 00:20:53.845 } 00:20:53.845 ] 00:20:53.845 }' 00:20:53.845 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:53.845 23:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.116 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:20:54.116 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:54.116 23:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.116 23:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.116 [2024-12-09 23:02:29.382021] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:54.116 [2024-12-09 23:02:29.382262] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:54.116 [2024-12-09 23:02:29.382302] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:20:54.116 [2024-12-09 23:02:29.382314] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:54.116 [2024-12-09 23:02:29.382807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:54.116 [2024-12-09 23:02:29.382824] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:54.116 [2024-12-09 23:02:29.382921] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:54.116 [2024-12-09 23:02:29.382949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:54.116 pt2 00:20:54.116 23:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.116 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:20:54.116 23:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.116 23:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.116 [2024-12-09 23:02:29.390045] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:54.116 23:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.116 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:20:54.116 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:54.116 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:54.116 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:54.116 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:54.116 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:20:54.116 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:54.116 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:54.116 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:54.116 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:54.116 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.116 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.116 23:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.116 23:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.116 23:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.116 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:54.116 "name": "raid_bdev1", 00:20:54.116 "uuid": "3a706014-e2f7-405d-b939-590aed4aaab7", 00:20:54.116 "strip_size_kb": 64, 00:20:54.116 "state": "configuring", 00:20:54.116 "raid_level": "concat", 00:20:54.116 "superblock": true, 00:20:54.116 "num_base_bdevs": 3, 00:20:54.116 "num_base_bdevs_discovered": 1, 00:20:54.116 "num_base_bdevs_operational": 3, 00:20:54.116 "base_bdevs_list": [ 00:20:54.116 { 00:20:54.116 "name": "pt1", 00:20:54.116 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:54.116 "is_configured": true, 00:20:54.116 "data_offset": 2048, 00:20:54.116 "data_size": 63488 00:20:54.116 }, 00:20:54.116 { 00:20:54.116 "name": null, 00:20:54.116 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:54.116 "is_configured": false, 00:20:54.116 "data_offset": 0, 00:20:54.116 "data_size": 63488 00:20:54.116 }, 00:20:54.116 { 00:20:54.116 "name": null, 00:20:54.116 
"uuid": "00000000-0000-0000-0000-000000000003", 00:20:54.116 "is_configured": false, 00:20:54.117 "data_offset": 2048, 00:20:54.117 "data_size": 63488 00:20:54.117 } 00:20:54.117 ] 00:20:54.117 }' 00:20:54.117 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:54.117 23:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.688 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:54.688 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:54.688 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:54.688 23:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.688 23:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.688 [2024-12-09 23:02:29.758096] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:54.688 [2024-12-09 23:02:29.758349] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:54.688 [2024-12-09 23:02:29.758380] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:54.688 [2024-12-09 23:02:29.758392] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:54.688 [2024-12-09 23:02:29.758931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:54.688 [2024-12-09 23:02:29.758963] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:54.688 [2024-12-09 23:02:29.759056] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:54.688 [2024-12-09 23:02:29.759082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:54.688 pt2 00:20:54.688 23:02:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.688 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:54.688 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:54.688 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:54.688 23:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.688 23:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.688 [2024-12-09 23:02:29.766086] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:54.688 [2024-12-09 23:02:29.766166] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:54.688 [2024-12-09 23:02:29.766182] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:54.688 [2024-12-09 23:02:29.766194] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:54.688 [2024-12-09 23:02:29.766628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:54.688 [2024-12-09 23:02:29.766669] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:54.688 [2024-12-09 23:02:29.766742] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:54.688 [2024-12-09 23:02:29.766764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:54.688 [2024-12-09 23:02:29.766897] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:54.688 [2024-12-09 23:02:29.766915] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:54.688 [2024-12-09 23:02:29.767233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:20:54.688 [2024-12-09 23:02:29.767393] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:54.688 [2024-12-09 23:02:29.767401] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:54.688 [2024-12-09 23:02:29.767549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:54.689 pt3 00:20:54.689 23:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.689 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:54.689 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:54.689 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:20:54.689 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:54.689 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:54.689 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:54.689 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:54.689 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:54.689 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:54.689 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:54.689 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:54.689 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:54.689 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.689 23:02:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.689 23:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.689 23:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.689 23:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.689 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:54.689 "name": "raid_bdev1", 00:20:54.689 "uuid": "3a706014-e2f7-405d-b939-590aed4aaab7", 00:20:54.689 "strip_size_kb": 64, 00:20:54.689 "state": "online", 00:20:54.689 "raid_level": "concat", 00:20:54.689 "superblock": true, 00:20:54.689 "num_base_bdevs": 3, 00:20:54.689 "num_base_bdevs_discovered": 3, 00:20:54.689 "num_base_bdevs_operational": 3, 00:20:54.689 "base_bdevs_list": [ 00:20:54.689 { 00:20:54.689 "name": "pt1", 00:20:54.689 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:54.689 "is_configured": true, 00:20:54.689 "data_offset": 2048, 00:20:54.689 "data_size": 63488 00:20:54.689 }, 00:20:54.689 { 00:20:54.689 "name": "pt2", 00:20:54.689 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:54.689 "is_configured": true, 00:20:54.689 "data_offset": 2048, 00:20:54.689 "data_size": 63488 00:20:54.689 }, 00:20:54.689 { 00:20:54.689 "name": "pt3", 00:20:54.689 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:54.689 "is_configured": true, 00:20:54.689 "data_offset": 2048, 00:20:54.689 "data_size": 63488 00:20:54.689 } 00:20:54.689 ] 00:20:54.689 }' 00:20:54.689 23:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:54.689 23:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.949 23:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:54.950 23:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:20:54.950 23:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:54.950 23:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:54.950 23:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:54.950 23:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:54.950 23:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:54.950 23:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.950 23:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.950 23:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:54.950 [2024-12-09 23:02:30.138601] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:54.950 23:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.950 23:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:54.950 "name": "raid_bdev1", 00:20:54.950 "aliases": [ 00:20:54.950 "3a706014-e2f7-405d-b939-590aed4aaab7" 00:20:54.950 ], 00:20:54.950 "product_name": "Raid Volume", 00:20:54.950 "block_size": 512, 00:20:54.950 "num_blocks": 190464, 00:20:54.950 "uuid": "3a706014-e2f7-405d-b939-590aed4aaab7", 00:20:54.950 "assigned_rate_limits": { 00:20:54.950 "rw_ios_per_sec": 0, 00:20:54.950 "rw_mbytes_per_sec": 0, 00:20:54.950 "r_mbytes_per_sec": 0, 00:20:54.950 "w_mbytes_per_sec": 0 00:20:54.950 }, 00:20:54.950 "claimed": false, 00:20:54.950 "zoned": false, 00:20:54.950 "supported_io_types": { 00:20:54.950 "read": true, 00:20:54.950 "write": true, 00:20:54.950 "unmap": true, 00:20:54.950 "flush": true, 00:20:54.950 "reset": true, 00:20:54.950 "nvme_admin": false, 00:20:54.950 "nvme_io": false, 
00:20:54.950 "nvme_io_md": false, 00:20:54.950 "write_zeroes": true, 00:20:54.950 "zcopy": false, 00:20:54.950 "get_zone_info": false, 00:20:54.950 "zone_management": false, 00:20:54.950 "zone_append": false, 00:20:54.950 "compare": false, 00:20:54.950 "compare_and_write": false, 00:20:54.950 "abort": false, 00:20:54.950 "seek_hole": false, 00:20:54.950 "seek_data": false, 00:20:54.950 "copy": false, 00:20:54.950 "nvme_iov_md": false 00:20:54.950 }, 00:20:54.950 "memory_domains": [ 00:20:54.950 { 00:20:54.950 "dma_device_id": "system", 00:20:54.950 "dma_device_type": 1 00:20:54.950 }, 00:20:54.950 { 00:20:54.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:54.950 "dma_device_type": 2 00:20:54.950 }, 00:20:54.950 { 00:20:54.950 "dma_device_id": "system", 00:20:54.950 "dma_device_type": 1 00:20:54.950 }, 00:20:54.950 { 00:20:54.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:54.950 "dma_device_type": 2 00:20:54.950 }, 00:20:54.950 { 00:20:54.950 "dma_device_id": "system", 00:20:54.950 "dma_device_type": 1 00:20:54.950 }, 00:20:54.950 { 00:20:54.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:54.950 "dma_device_type": 2 00:20:54.950 } 00:20:54.950 ], 00:20:54.950 "driver_specific": { 00:20:54.950 "raid": { 00:20:54.950 "uuid": "3a706014-e2f7-405d-b939-590aed4aaab7", 00:20:54.950 "strip_size_kb": 64, 00:20:54.950 "state": "online", 00:20:54.950 "raid_level": "concat", 00:20:54.950 "superblock": true, 00:20:54.950 "num_base_bdevs": 3, 00:20:54.950 "num_base_bdevs_discovered": 3, 00:20:54.950 "num_base_bdevs_operational": 3, 00:20:54.950 "base_bdevs_list": [ 00:20:54.950 { 00:20:54.950 "name": "pt1", 00:20:54.950 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:54.950 "is_configured": true, 00:20:54.950 "data_offset": 2048, 00:20:54.950 "data_size": 63488 00:20:54.950 }, 00:20:54.950 { 00:20:54.950 "name": "pt2", 00:20:54.950 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:54.950 "is_configured": true, 00:20:54.950 "data_offset": 2048, 00:20:54.950 
"data_size": 63488 00:20:54.950 }, 00:20:54.950 { 00:20:54.950 "name": "pt3", 00:20:54.950 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:54.950 "is_configured": true, 00:20:54.950 "data_offset": 2048, 00:20:54.950 "data_size": 63488 00:20:54.950 } 00:20:54.950 ] 00:20:54.950 } 00:20:54.950 } 00:20:54.950 }' 00:20:54.950 23:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:54.950 23:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:54.950 pt2 00:20:54.950 pt3' 00:20:54.950 23:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:54.950 23:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:54.950 23:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:54.950 23:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:54.950 23:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.950 23:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.950 23:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:54.950 23:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.950 23:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:54.950 23:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:54.950 23:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:54.950 23:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:20:54.950 23:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.950 23:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.950 23:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:54.950 23:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.211 23:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:55.211 23:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:55.211 23:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:55.211 23:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:20:55.211 23:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:55.211 23:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.211 23:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.211 23:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.211 23:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:55.211 23:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:55.211 23:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:55.211 23:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.211 23:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.211 23:02:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:55.211 [2024-12-09 23:02:30.354615] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:55.211 23:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.211 23:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3a706014-e2f7-405d-b939-590aed4aaab7 '!=' 3a706014-e2f7-405d-b939-590aed4aaab7 ']' 00:20:55.211 23:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:20:55.211 23:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:55.211 23:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:20:55.211 23:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65235 00:20:55.211 23:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65235 ']' 00:20:55.211 23:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65235 00:20:55.211 23:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:20:55.211 23:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:55.211 23:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65235 00:20:55.211 killing process with pid 65235 00:20:55.211 23:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:55.211 23:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:55.211 23:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65235' 00:20:55.211 23:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65235 00:20:55.211 23:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 65235 00:20:55.211 
[2024-12-09 23:02:30.410438] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:55.211 [2024-12-09 23:02:30.410558] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:55.211 [2024-12-09 23:02:30.410633] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:55.211 [2024-12-09 23:02:30.410647] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:55.472 [2024-12-09 23:02:30.629490] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:56.427 ************************************ 00:20:56.427 END TEST raid_superblock_test 00:20:56.427 ************************************ 00:20:56.427 23:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:20:56.427 00:20:56.427 real 0m4.321s 00:20:56.427 user 0m6.031s 00:20:56.427 sys 0m0.812s 00:20:56.427 23:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:56.427 23:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.427 23:02:31 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:20:56.427 23:02:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:56.427 23:02:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:56.427 23:02:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:56.427 ************************************ 00:20:56.427 START TEST raid_read_error_test 00:20:56.427 ************************************ 00:20:56.427 23:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:20:56.427 23:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:20:56.427 23:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:20:56.427 23:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:20:56.427 23:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:20:56.427 23:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:56.427 23:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:20:56.427 23:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:56.427 23:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:56.427 23:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:20:56.427 23:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:56.427 23:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:56.427 23:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:20:56.427 23:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:56.427 23:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:56.427 23:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:56.427 23:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:20:56.427 23:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:20:56.427 23:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:20:56.427 23:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:20:56.427 23:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:20:56.427 23:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:20:56.427 23:02:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:20:56.427 23:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:20:56.427 23:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:20:56.427 23:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:20:56.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.427 23:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.sWTtRxB7wI 00:20:56.427 23:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65478 00:20:56.427 23:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65478 00:20:56.427 23:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65478 ']' 00:20:56.427 23:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.427 23:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:56.427 23:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.427 23:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:56.427 23:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.427 23:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:56.427 [2024-12-09 23:02:31.618491] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:20:56.427 [2024-12-09 23:02:31.618664] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65478 ] 00:20:56.427 [2024-12-09 23:02:31.781268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.691 [2024-12-09 23:02:31.923143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.953 [2024-12-09 23:02:32.087354] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:56.953 [2024-12-09 23:02:32.087410] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:57.214 23:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:57.214 23:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:20:57.214 23:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:57.214 23:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:57.214 23:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.214 23:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.483 BaseBdev1_malloc 00:20:57.483 23:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.483 23:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:20:57.483 23:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.483 23:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.483 true 00:20:57.483 23:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:57.483 23:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:57.483 23:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.483 23:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.483 [2024-12-09 23:02:32.624396] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:57.483 [2024-12-09 23:02:32.624641] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:57.483 [2024-12-09 23:02:32.624677] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:57.483 [2024-12-09 23:02:32.624692] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:57.483 [2024-12-09 23:02:32.627317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:57.483 BaseBdev1 00:20:57.483 [2024-12-09 23:02:32.627507] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:57.483 23:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.483 23:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:57.483 23:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:57.483 23:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.483 23:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.483 BaseBdev2_malloc 00:20:57.483 23:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.483 23:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:20:57.483 23:02:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.483 23:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.483 true 00:20:57.483 23:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.483 23:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:57.483 23:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.483 23:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.483 [2024-12-09 23:02:32.685501] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:57.483 [2024-12-09 23:02:32.685712] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:57.483 [2024-12-09 23:02:32.685757] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:20:57.483 [2024-12-09 23:02:32.685950] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:57.483 [2024-12-09 23:02:32.688529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:57.483 [2024-12-09 23:02:32.688696] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:57.483 BaseBdev2 00:20:57.483 23:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.483 23:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:57.483 23:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:57.483 23:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.483 23:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.483 BaseBdev3_malloc 00:20:57.483 23:02:32 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.483 23:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:20:57.483 23:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.483 23:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.483 true 00:20:57.483 23:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.483 23:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:57.483 23:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.483 23:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.484 [2024-12-09 23:02:32.750146] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:57.484 [2024-12-09 23:02:32.750270] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:57.484 [2024-12-09 23:02:32.750312] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:57.484 [2024-12-09 23:02:32.750473] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:57.484 [2024-12-09 23:02:32.753294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:57.484 [2024-12-09 23:02:32.753475] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:57.484 BaseBdev3 00:20:57.484 23:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.484 23:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:20:57.484 23:02:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.484 23:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.484 [2024-12-09 23:02:32.758413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:57.484 [2024-12-09 23:02:32.760832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:57.484 [2024-12-09 23:02:32.761083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:57.484 [2024-12-09 23:02:32.761397] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:57.484 [2024-12-09 23:02:32.761412] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:57.484 [2024-12-09 23:02:32.761737] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:20:57.484 [2024-12-09 23:02:32.761903] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:57.484 [2024-12-09 23:02:32.761917] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:57.484 [2024-12-09 23:02:32.762092] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:57.484 23:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.484 23:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:20:57.484 23:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:57.484 23:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:57.484 23:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:57.484 23:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:57.484 23:02:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:57.484 23:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:57.484 23:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:57.484 23:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:57.484 23:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:57.484 23:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.484 23:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.484 23:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.484 23:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.484 23:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.484 23:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:57.484 "name": "raid_bdev1", 00:20:57.484 "uuid": "467c9884-58bc-4f08-b682-c7b94caca71f", 00:20:57.484 "strip_size_kb": 64, 00:20:57.484 "state": "online", 00:20:57.484 "raid_level": "concat", 00:20:57.484 "superblock": true, 00:20:57.484 "num_base_bdevs": 3, 00:20:57.484 "num_base_bdevs_discovered": 3, 00:20:57.484 "num_base_bdevs_operational": 3, 00:20:57.484 "base_bdevs_list": [ 00:20:57.484 { 00:20:57.484 "name": "BaseBdev1", 00:20:57.484 "uuid": "353331f6-c74a-50b0-8493-35a0f610b939", 00:20:57.484 "is_configured": true, 00:20:57.484 "data_offset": 2048, 00:20:57.484 "data_size": 63488 00:20:57.484 }, 00:20:57.484 { 00:20:57.484 "name": "BaseBdev2", 00:20:57.484 "uuid": "112e0595-19bd-51aa-bb96-8034f9a8b7dd", 00:20:57.484 "is_configured": true, 00:20:57.484 "data_offset": 2048, 00:20:57.484 "data_size": 63488 
00:20:57.484 }, 00:20:57.484 { 00:20:57.484 "name": "BaseBdev3", 00:20:57.484 "uuid": "eac438e2-4684-50ec-a478-64b95ad47003", 00:20:57.484 "is_configured": true, 00:20:57.484 "data_offset": 2048, 00:20:57.484 "data_size": 63488 00:20:57.484 } 00:20:57.484 ] 00:20:57.484 }' 00:20:57.484 23:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:57.484 23:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.745 23:02:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:20:57.745 23:02:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:20:58.007 [2024-12-09 23:02:33.187598] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:20:58.957 23:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:20:58.957 23:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.957 23:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.957 23:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.957 23:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:20:58.957 23:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:20:58.957 23:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:20:58.957 23:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:20:58.957 23:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:58.957 23:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:20:58.957 23:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:58.957 23:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:58.957 23:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:58.957 23:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:58.957 23:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:58.957 23:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:58.957 23:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:58.957 23:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.957 23:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.957 23:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.957 23:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.957 23:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.957 23:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:58.957 "name": "raid_bdev1", 00:20:58.957 "uuid": "467c9884-58bc-4f08-b682-c7b94caca71f", 00:20:58.957 "strip_size_kb": 64, 00:20:58.957 "state": "online", 00:20:58.957 "raid_level": "concat", 00:20:58.957 "superblock": true, 00:20:58.957 "num_base_bdevs": 3, 00:20:58.957 "num_base_bdevs_discovered": 3, 00:20:58.957 "num_base_bdevs_operational": 3, 00:20:58.957 "base_bdevs_list": [ 00:20:58.957 { 00:20:58.957 "name": "BaseBdev1", 00:20:58.957 "uuid": "353331f6-c74a-50b0-8493-35a0f610b939", 00:20:58.957 "is_configured": true, 00:20:58.957 "data_offset": 2048, 00:20:58.957 "data_size": 63488 
00:20:58.957 }, 00:20:58.957 { 00:20:58.957 "name": "BaseBdev2", 00:20:58.957 "uuid": "112e0595-19bd-51aa-bb96-8034f9a8b7dd", 00:20:58.957 "is_configured": true, 00:20:58.957 "data_offset": 2048, 00:20:58.957 "data_size": 63488 00:20:58.957 }, 00:20:58.957 { 00:20:58.957 "name": "BaseBdev3", 00:20:58.957 "uuid": "eac438e2-4684-50ec-a478-64b95ad47003", 00:20:58.957 "is_configured": true, 00:20:58.957 "data_offset": 2048, 00:20:58.957 "data_size": 63488 00:20:58.957 } 00:20:58.957 ] 00:20:58.957 }' 00:20:58.957 23:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:58.957 23:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.270 23:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:59.270 23:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.270 23:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.270 [2024-12-09 23:02:34.471021] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:59.270 [2024-12-09 23:02:34.471061] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:59.270 [2024-12-09 23:02:34.474358] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:59.270 [2024-12-09 23:02:34.474417] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:59.270 [2024-12-09 23:02:34.474461] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:59.271 [2024-12-09 23:02:34.474473] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:59.271 { 00:20:59.271 "results": [ 00:20:59.271 { 00:20:59.271 "job": "raid_bdev1", 00:20:59.271 "core_mask": "0x1", 00:20:59.271 "workload": "randrw", 00:20:59.271 "percentage": 50, 
00:20:59.271 "status": "finished", 00:20:59.271 "queue_depth": 1, 00:20:59.271 "io_size": 131072, 00:20:59.271 "runtime": 1.281048, 00:20:59.271 "iops": 12113.519555863637, 00:20:59.271 "mibps": 1514.1899444829546, 00:20:59.271 "io_failed": 1, 00:20:59.271 "io_timeout": 0, 00:20:59.271 "avg_latency_us": 114.22622115818326, 00:20:59.271 "min_latency_us": 34.46153846153846, 00:20:59.271 "max_latency_us": 1726.6215384615384 00:20:59.271 } 00:20:59.271 ], 00:20:59.271 "core_count": 1 00:20:59.271 } 00:20:59.271 23:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.271 23:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65478 00:20:59.271 23:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65478 ']' 00:20:59.271 23:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65478 00:20:59.271 23:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:20:59.271 23:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:59.271 23:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65478 00:20:59.271 killing process with pid 65478 00:20:59.271 23:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:59.271 23:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:59.271 23:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65478' 00:20:59.271 23:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65478 00:20:59.271 23:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65478 00:20:59.271 [2024-12-09 23:02:34.506887] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:59.532 [2024-12-09 
23:02:34.669975] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:00.473 23:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.sWTtRxB7wI 00:21:00.473 23:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:21:00.473 23:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:21:00.473 23:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.78 00:21:00.473 23:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:21:00.473 23:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:00.473 23:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:21:00.473 23:02:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.78 != \0\.\0\0 ]] 00:21:00.473 00:21:00.473 real 0m4.019s 00:21:00.473 user 0m4.730s 00:21:00.473 sys 0m0.536s 00:21:00.473 23:02:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:00.473 23:02:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.473 ************************************ 00:21:00.473 END TEST raid_read_error_test 00:21:00.473 ************************************ 00:21:00.473 23:02:35 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:21:00.473 23:02:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:00.473 23:02:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:00.473 23:02:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:00.473 ************************************ 00:21:00.473 START TEST raid_write_error_test 00:21:00.473 ************************************ 00:21:00.473 23:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:21:00.473 23:02:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:21:00.473 23:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:21:00.473 23:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:21:00.473 23:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:21:00.473 23:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:00.473 23:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:21:00.473 23:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:00.473 23:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:00.473 23:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:21:00.473 23:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:00.473 23:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:00.473 23:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:21:00.473 23:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:00.473 23:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:00.473 23:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:00.473 23:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:21:00.473 23:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:21:00.473 23:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:21:00.473 23:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:21:00.473 23:02:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:21:00.473 23:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:21:00.473 23:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:21:00.473 23:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:21:00.473 23:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:21:00.473 23:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:21:00.473 23:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.B2DJ6VSwgl 00:21:00.473 23:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65618 00:21:00.473 23:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65618 00:21:00.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.473 23:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65618 ']' 00:21:00.473 23:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.473 23:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:00.473 23:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:00.473 23:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:00.473 23:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.473 23:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:00.473 [2024-12-09 23:02:35.711339] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:21:00.473 [2024-12-09 23:02:35.711511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65618 ] 00:21:00.737 [2024-12-09 23:02:35.871719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.737 [2024-12-09 23:02:36.022138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.028 [2024-12-09 23:02:36.194691] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:01.028 [2024-12-09 23:02:36.194778] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:01.289 23:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:01.289 23:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:21:01.289 23:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:01.289 23:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:01.289 23:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.289 23:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.550 BaseBdev1_malloc 00:21:01.550 23:02:36 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.550 23:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:21:01.550 23:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.550 23:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.550 true 00:21:01.550 23:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.550 23:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:01.550 23:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.550 23:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.550 [2024-12-09 23:02:36.665672] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:01.550 [2024-12-09 23:02:36.665962] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:01.550 [2024-12-09 23:02:36.666001] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:01.550 [2024-12-09 23:02:36.666014] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:01.550 [2024-12-09 23:02:36.668733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:01.550 [2024-12-09 23:02:36.668835] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:01.550 BaseBdev1 00:21:01.550 23:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.550 23:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:01.550 23:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:21:01.550 23:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.550 23:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.550 BaseBdev2_malloc 00:21:01.550 23:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.550 23:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:21:01.550 23:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.550 23:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.550 true 00:21:01.550 23:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.550 23:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:01.550 23:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.550 23:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.550 [2024-12-09 23:02:36.725946] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:01.550 [2024-12-09 23:02:36.726252] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:01.550 [2024-12-09 23:02:36.726310] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:01.550 [2024-12-09 23:02:36.726389] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:01.550 [2024-12-09 23:02:36.729572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:01.550 [2024-12-09 23:02:36.729825] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:01.550 BaseBdev2 00:21:01.550 23:02:36 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.550 23:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:01.550 23:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:01.550 23:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.550 23:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.550 BaseBdev3_malloc 00:21:01.550 23:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.550 23:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:21:01.550 23:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.550 23:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.550 true 00:21:01.550 23:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.550 23:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:01.550 23:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.550 23:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.550 [2024-12-09 23:02:36.800519] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:01.550 [2024-12-09 23:02:36.800787] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:01.550 [2024-12-09 23:02:36.800822] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:01.550 [2024-12-09 23:02:36.800835] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:01.550 [2024-12-09 23:02:36.803554] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:01.550 [2024-12-09 23:02:36.803613] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:01.550 BaseBdev3 00:21:01.551 23:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.551 23:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:21:01.551 23:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.551 23:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.551 [2024-12-09 23:02:36.812787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:01.551 [2024-12-09 23:02:36.815906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:01.551 [2024-12-09 23:02:36.816275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:01.551 [2024-12-09 23:02:36.816695] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:01.551 [2024-12-09 23:02:36.816927] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:01.551 [2024-12-09 23:02:36.817438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:21:01.551 [2024-12-09 23:02:36.817916] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:01.551 [2024-12-09 23:02:36.817944] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:01.551 [2024-12-09 23:02:36.818238] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:01.551 23:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.551 
23:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:21:01.551 23:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:01.551 23:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:01.551 23:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:01.551 23:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:01.551 23:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:01.551 23:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:01.551 23:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:01.551 23:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:01.551 23:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:01.551 23:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.551 23:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.551 23:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.551 23:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.551 23:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.551 23:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:01.551 "name": "raid_bdev1", 00:21:01.551 "uuid": "b5b79549-7ee9-43bb-ab63-e829f5899d45", 00:21:01.551 "strip_size_kb": 64, 00:21:01.551 "state": "online", 00:21:01.551 "raid_level": "concat", 00:21:01.551 "superblock": true, 
00:21:01.551 "num_base_bdevs": 3, 00:21:01.551 "num_base_bdevs_discovered": 3, 00:21:01.551 "num_base_bdevs_operational": 3, 00:21:01.551 "base_bdevs_list": [ 00:21:01.551 { 00:21:01.551 "name": "BaseBdev1", 00:21:01.551 "uuid": "7bfb2dfc-2170-5313-93c0-b2caff245fb5", 00:21:01.551 "is_configured": true, 00:21:01.551 "data_offset": 2048, 00:21:01.551 "data_size": 63488 00:21:01.551 }, 00:21:01.551 { 00:21:01.551 "name": "BaseBdev2", 00:21:01.551 "uuid": "97e8608a-5f10-518e-acb4-de9431eb7bb7", 00:21:01.551 "is_configured": true, 00:21:01.551 "data_offset": 2048, 00:21:01.551 "data_size": 63488 00:21:01.551 }, 00:21:01.551 { 00:21:01.551 "name": "BaseBdev3", 00:21:01.551 "uuid": "9ce9aba9-90dc-54b0-9a9a-66e6539d4e88", 00:21:01.551 "is_configured": true, 00:21:01.551 "data_offset": 2048, 00:21:01.551 "data_size": 63488 00:21:01.551 } 00:21:01.551 ] 00:21:01.551 }' 00:21:01.551 23:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:01.551 23:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.122 23:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:21:02.122 23:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:21:02.122 [2024-12-09 23:02:37.281971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:21:03.065 23:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:21:03.065 23:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.065 23:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.065 23:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.065 23:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local 
expected_num_base_bdevs 00:21:03.065 23:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:21:03.065 23:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:21:03.065 23:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:21:03.065 23:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:03.065 23:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:03.065 23:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:03.065 23:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:03.065 23:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:03.065 23:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:03.065 23:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:03.065 23:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:03.065 23:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:03.065 23:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.065 23:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.065 23:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.065 23:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.065 23:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.065 23:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:21:03.065 "name": "raid_bdev1", 00:21:03.065 "uuid": "b5b79549-7ee9-43bb-ab63-e829f5899d45", 00:21:03.065 "strip_size_kb": 64, 00:21:03.065 "state": "online", 00:21:03.065 "raid_level": "concat", 00:21:03.065 "superblock": true, 00:21:03.065 "num_base_bdevs": 3, 00:21:03.065 "num_base_bdevs_discovered": 3, 00:21:03.065 "num_base_bdevs_operational": 3, 00:21:03.065 "base_bdevs_list": [ 00:21:03.065 { 00:21:03.065 "name": "BaseBdev1", 00:21:03.065 "uuid": "7bfb2dfc-2170-5313-93c0-b2caff245fb5", 00:21:03.065 "is_configured": true, 00:21:03.065 "data_offset": 2048, 00:21:03.065 "data_size": 63488 00:21:03.065 }, 00:21:03.065 { 00:21:03.065 "name": "BaseBdev2", 00:21:03.065 "uuid": "97e8608a-5f10-518e-acb4-de9431eb7bb7", 00:21:03.065 "is_configured": true, 00:21:03.065 "data_offset": 2048, 00:21:03.065 "data_size": 63488 00:21:03.065 }, 00:21:03.065 { 00:21:03.065 "name": "BaseBdev3", 00:21:03.065 "uuid": "9ce9aba9-90dc-54b0-9a9a-66e6539d4e88", 00:21:03.065 "is_configured": true, 00:21:03.065 "data_offset": 2048, 00:21:03.065 "data_size": 63488 00:21:03.065 } 00:21:03.065 ] 00:21:03.065 }' 00:21:03.065 23:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:03.065 23:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.328 23:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:03.328 23:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.328 23:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.328 [2024-12-09 23:02:38.563427] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:03.328 [2024-12-09 23:02:38.563646] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:03.328 [2024-12-09 23:02:38.567029] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:21:03.328 [2024-12-09 23:02:38.567257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:03.328 [2024-12-09 23:02:38.567342] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:03.328 { 00:21:03.328 "results": [ 00:21:03.328 { 00:21:03.328 "job": "raid_bdev1", 00:21:03.328 "core_mask": "0x1", 00:21:03.328 "workload": "randrw", 00:21:03.328 "percentage": 50, 00:21:03.328 "status": "finished", 00:21:03.328 "queue_depth": 1, 00:21:03.328 "io_size": 131072, 00:21:03.328 "runtime": 1.279562, 00:21:03.328 "iops": 11097.547442015315, 00:21:03.328 "mibps": 1387.1934302519144, 00:21:03.328 "io_failed": 1, 00:21:03.328 "io_timeout": 0, 00:21:03.328 "avg_latency_us": 125.11265923851515, 00:21:03.328 "min_latency_us": 34.26461538461538, 00:21:03.328 "max_latency_us": 1764.4307692307693 00:21:03.328 } 00:21:03.328 ], 00:21:03.328 "core_count": 1 00:21:03.328 } 00:21:03.328 [2024-12-09 23:02:38.567447] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:03.328 23:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.328 23:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65618 00:21:03.328 23:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65618 ']' 00:21:03.328 23:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65618 00:21:03.328 23:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:21:03.328 23:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:03.328 23:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65618 00:21:03.328 killing process with pid 65618 00:21:03.328 23:02:38 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:03.328 23:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:03.328 23:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65618' 00:21:03.328 23:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65618 00:21:03.328 23:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65618 00:21:03.328 [2024-12-09 23:02:38.598383] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:03.589 [2024-12-09 23:02:38.766292] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:04.580 23:02:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.B2DJ6VSwgl 00:21:04.580 23:02:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:21:04.580 23:02:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:21:04.580 23:02:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.78 00:21:04.580 23:02:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:21:04.580 23:02:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:04.580 23:02:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:21:04.580 23:02:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.78 != \0\.\0\0 ]] 00:21:04.580 00:21:04.580 real 0m4.029s 00:21:04.580 user 0m4.705s 00:21:04.580 sys 0m0.551s 00:21:04.581 ************************************ 00:21:04.581 END TEST raid_write_error_test 00:21:04.581 ************************************ 00:21:04.581 23:02:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:04.581 23:02:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.581 
23:02:39 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:21:04.581 23:02:39 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:21:04.581 23:02:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:04.581 23:02:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:04.581 23:02:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:04.581 ************************************ 00:21:04.581 START TEST raid_state_function_test 00:21:04.581 ************************************ 00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:21:04.581 Process raid pid: 65756 00:21:04.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65756 00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65756' 00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65756 00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65756 ']' 00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:04.581 23:02:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.581 [2024-12-09 23:02:39.803812] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:21:04.581 [2024-12-09 23:02:39.803982] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:04.842 [2024-12-09 23:02:39.971722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.842 [2024-12-09 23:02:40.117785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.104 [2024-12-09 23:02:40.291390] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:05.104 [2024-12-09 23:02:40.291450] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:05.365 23:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:05.365 23:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:21:05.365 23:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:05.365 23:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.365 23:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.365 [2024-12-09 23:02:40.679232] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:05.365 [2024-12-09 23:02:40.679317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:05.365 [2024-12-09 23:02:40.679330] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:05.365 [2024-12-09 23:02:40.679342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:05.365 [2024-12-09 23:02:40.679350] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:21:05.365 [2024-12-09 23:02:40.679361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:05.365 23:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.365 23:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:05.365 23:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:05.365 23:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:05.366 23:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:05.366 23:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:05.366 23:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:05.366 23:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:05.366 23:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:05.366 23:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:05.366 23:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:05.366 23:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.366 23:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:05.366 23:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.366 23:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.366 23:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.366 23:02:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:05.366 "name": "Existed_Raid", 00:21:05.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.366 "strip_size_kb": 0, 00:21:05.366 "state": "configuring", 00:21:05.366 "raid_level": "raid1", 00:21:05.366 "superblock": false, 00:21:05.366 "num_base_bdevs": 3, 00:21:05.366 "num_base_bdevs_discovered": 0, 00:21:05.366 "num_base_bdevs_operational": 3, 00:21:05.366 "base_bdevs_list": [ 00:21:05.366 { 00:21:05.366 "name": "BaseBdev1", 00:21:05.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.366 "is_configured": false, 00:21:05.366 "data_offset": 0, 00:21:05.366 "data_size": 0 00:21:05.366 }, 00:21:05.366 { 00:21:05.366 "name": "BaseBdev2", 00:21:05.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.366 "is_configured": false, 00:21:05.366 "data_offset": 0, 00:21:05.366 "data_size": 0 00:21:05.366 }, 00:21:05.366 { 00:21:05.366 "name": "BaseBdev3", 00:21:05.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.366 "is_configured": false, 00:21:05.366 "data_offset": 0, 00:21:05.366 "data_size": 0 00:21:05.366 } 00:21:05.366 ] 00:21:05.366 }' 00:21:05.366 23:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:05.366 23:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.936 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:05.936 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.936 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.936 [2024-12-09 23:02:41.035258] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:05.937 [2024-12-09 23:02:41.035312] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.937 [2024-12-09 23:02:41.047249] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:05.937 [2024-12-09 23:02:41.047314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:05.937 [2024-12-09 23:02:41.047326] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:05.937 [2024-12-09 23:02:41.047338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:05.937 [2024-12-09 23:02:41.047345] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:05.937 [2024-12-09 23:02:41.047356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.937 [2024-12-09 23:02:41.086075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:05.937 BaseBdev1 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.937 [ 00:21:05.937 { 00:21:05.937 "name": "BaseBdev1", 00:21:05.937 "aliases": [ 00:21:05.937 "1b6320fa-ffc5-414f-b3a3-e681a3d9858a" 00:21:05.937 ], 00:21:05.937 "product_name": "Malloc disk", 00:21:05.937 "block_size": 512, 00:21:05.937 "num_blocks": 65536, 00:21:05.937 "uuid": "1b6320fa-ffc5-414f-b3a3-e681a3d9858a", 00:21:05.937 "assigned_rate_limits": { 00:21:05.937 "rw_ios_per_sec": 0, 00:21:05.937 "rw_mbytes_per_sec": 0, 00:21:05.937 "r_mbytes_per_sec": 0, 00:21:05.937 "w_mbytes_per_sec": 0 00:21:05.937 }, 
00:21:05.937 "claimed": true, 00:21:05.937 "claim_type": "exclusive_write", 00:21:05.937 "zoned": false, 00:21:05.937 "supported_io_types": { 00:21:05.937 "read": true, 00:21:05.937 "write": true, 00:21:05.937 "unmap": true, 00:21:05.937 "flush": true, 00:21:05.937 "reset": true, 00:21:05.937 "nvme_admin": false, 00:21:05.937 "nvme_io": false, 00:21:05.937 "nvme_io_md": false, 00:21:05.937 "write_zeroes": true, 00:21:05.937 "zcopy": true, 00:21:05.937 "get_zone_info": false, 00:21:05.937 "zone_management": false, 00:21:05.937 "zone_append": false, 00:21:05.937 "compare": false, 00:21:05.937 "compare_and_write": false, 00:21:05.937 "abort": true, 00:21:05.937 "seek_hole": false, 00:21:05.937 "seek_data": false, 00:21:05.937 "copy": true, 00:21:05.937 "nvme_iov_md": false 00:21:05.937 }, 00:21:05.937 "memory_domains": [ 00:21:05.937 { 00:21:05.937 "dma_device_id": "system", 00:21:05.937 "dma_device_type": 1 00:21:05.937 }, 00:21:05.937 { 00:21:05.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:05.937 "dma_device_type": 2 00:21:05.937 } 00:21:05.937 ], 00:21:05.937 "driver_specific": {} 00:21:05.937 } 00:21:05.937 ] 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:05.937 23:02:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:05.937 "name": "Existed_Raid", 00:21:05.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.937 "strip_size_kb": 0, 00:21:05.937 "state": "configuring", 00:21:05.937 "raid_level": "raid1", 00:21:05.937 "superblock": false, 00:21:05.937 "num_base_bdevs": 3, 00:21:05.937 "num_base_bdevs_discovered": 1, 00:21:05.937 "num_base_bdevs_operational": 3, 00:21:05.937 "base_bdevs_list": [ 00:21:05.937 { 00:21:05.937 "name": "BaseBdev1", 00:21:05.937 "uuid": "1b6320fa-ffc5-414f-b3a3-e681a3d9858a", 00:21:05.937 "is_configured": true, 00:21:05.937 "data_offset": 0, 00:21:05.937 "data_size": 65536 00:21:05.937 }, 00:21:05.937 { 00:21:05.937 "name": "BaseBdev2", 00:21:05.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.937 "is_configured": false, 00:21:05.937 
"data_offset": 0, 00:21:05.937 "data_size": 0 00:21:05.937 }, 00:21:05.937 { 00:21:05.937 "name": "BaseBdev3", 00:21:05.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.937 "is_configured": false, 00:21:05.937 "data_offset": 0, 00:21:05.937 "data_size": 0 00:21:05.937 } 00:21:05.937 ] 00:21:05.937 }' 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:05.937 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.200 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:06.200 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.200 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.200 [2024-12-09 23:02:41.470230] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:06.200 [2024-12-09 23:02:41.470298] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:06.200 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.200 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:06.200 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.200 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.200 [2024-12-09 23:02:41.478292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:06.200 [2024-12-09 23:02:41.480500] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:06.200 [2024-12-09 23:02:41.480557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 
doesn't exist now 00:21:06.200 [2024-12-09 23:02:41.480568] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:06.200 [2024-12-09 23:02:41.480578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:06.200 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.200 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:06.200 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:06.200 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:06.200 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:06.200 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:06.200 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:06.200 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:06.200 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:06.200 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:06.200 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:06.200 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:06.200 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:06.200 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.200 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.200 
23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.200 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:06.200 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.200 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:06.200 "name": "Existed_Raid", 00:21:06.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.200 "strip_size_kb": 0, 00:21:06.200 "state": "configuring", 00:21:06.200 "raid_level": "raid1", 00:21:06.200 "superblock": false, 00:21:06.200 "num_base_bdevs": 3, 00:21:06.200 "num_base_bdevs_discovered": 1, 00:21:06.200 "num_base_bdevs_operational": 3, 00:21:06.200 "base_bdevs_list": [ 00:21:06.200 { 00:21:06.200 "name": "BaseBdev1", 00:21:06.200 "uuid": "1b6320fa-ffc5-414f-b3a3-e681a3d9858a", 00:21:06.200 "is_configured": true, 00:21:06.200 "data_offset": 0, 00:21:06.200 "data_size": 65536 00:21:06.200 }, 00:21:06.200 { 00:21:06.200 "name": "BaseBdev2", 00:21:06.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.200 "is_configured": false, 00:21:06.200 "data_offset": 0, 00:21:06.200 "data_size": 0 00:21:06.200 }, 00:21:06.200 { 00:21:06.200 "name": "BaseBdev3", 00:21:06.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.200 "is_configured": false, 00:21:06.200 "data_offset": 0, 00:21:06.200 "data_size": 0 00:21:06.201 } 00:21:06.201 ] 00:21:06.201 }' 00:21:06.201 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:06.201 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.461 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:06.461 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.461 23:02:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.757 [2024-12-09 23:02:41.830994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:06.757 BaseBdev2 00:21:06.757 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.757 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:06.757 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:06.757 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:06.757 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:06.757 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:06.757 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:06.757 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:06.757 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.757 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.757 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.757 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:06.757 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.757 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.757 [ 00:21:06.757 { 00:21:06.757 "name": "BaseBdev2", 00:21:06.757 "aliases": [ 00:21:06.757 "15f77505-79bf-4eaf-b05d-be2445a26be6" 00:21:06.757 ], 00:21:06.757 "product_name": "Malloc disk", 
00:21:06.757 "block_size": 512, 00:21:06.757 "num_blocks": 65536, 00:21:06.758 "uuid": "15f77505-79bf-4eaf-b05d-be2445a26be6", 00:21:06.758 "assigned_rate_limits": { 00:21:06.758 "rw_ios_per_sec": 0, 00:21:06.758 "rw_mbytes_per_sec": 0, 00:21:06.758 "r_mbytes_per_sec": 0, 00:21:06.758 "w_mbytes_per_sec": 0 00:21:06.758 }, 00:21:06.758 "claimed": true, 00:21:06.758 "claim_type": "exclusive_write", 00:21:06.758 "zoned": false, 00:21:06.758 "supported_io_types": { 00:21:06.758 "read": true, 00:21:06.758 "write": true, 00:21:06.758 "unmap": true, 00:21:06.758 "flush": true, 00:21:06.758 "reset": true, 00:21:06.758 "nvme_admin": false, 00:21:06.758 "nvme_io": false, 00:21:06.758 "nvme_io_md": false, 00:21:06.758 "write_zeroes": true, 00:21:06.758 "zcopy": true, 00:21:06.758 "get_zone_info": false, 00:21:06.758 "zone_management": false, 00:21:06.758 "zone_append": false, 00:21:06.758 "compare": false, 00:21:06.758 "compare_and_write": false, 00:21:06.758 "abort": true, 00:21:06.758 "seek_hole": false, 00:21:06.758 "seek_data": false, 00:21:06.758 "copy": true, 00:21:06.758 "nvme_iov_md": false 00:21:06.758 }, 00:21:06.758 "memory_domains": [ 00:21:06.758 { 00:21:06.758 "dma_device_id": "system", 00:21:06.758 "dma_device_type": 1 00:21:06.758 }, 00:21:06.758 { 00:21:06.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:06.758 "dma_device_type": 2 00:21:06.758 } 00:21:06.758 ], 00:21:06.758 "driver_specific": {} 00:21:06.758 } 00:21:06.758 ] 00:21:06.758 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.758 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:06.758 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:06.758 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:06.758 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:21:06.758 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:06.758 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:06.758 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:06.758 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:06.758 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:06.758 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:06.758 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:06.758 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:06.758 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:06.758 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.758 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.758 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:06.758 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.758 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.758 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:06.758 "name": "Existed_Raid", 00:21:06.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.758 "strip_size_kb": 0, 00:21:06.758 "state": "configuring", 00:21:06.758 "raid_level": "raid1", 00:21:06.758 "superblock": false, 00:21:06.758 "num_base_bdevs": 3, 
00:21:06.758 "num_base_bdevs_discovered": 2, 00:21:06.758 "num_base_bdevs_operational": 3, 00:21:06.758 "base_bdevs_list": [ 00:21:06.758 { 00:21:06.758 "name": "BaseBdev1", 00:21:06.758 "uuid": "1b6320fa-ffc5-414f-b3a3-e681a3d9858a", 00:21:06.758 "is_configured": true, 00:21:06.758 "data_offset": 0, 00:21:06.758 "data_size": 65536 00:21:06.758 }, 00:21:06.758 { 00:21:06.758 "name": "BaseBdev2", 00:21:06.758 "uuid": "15f77505-79bf-4eaf-b05d-be2445a26be6", 00:21:06.758 "is_configured": true, 00:21:06.758 "data_offset": 0, 00:21:06.758 "data_size": 65536 00:21:06.758 }, 00:21:06.758 { 00:21:06.758 "name": "BaseBdev3", 00:21:06.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.758 "is_configured": false, 00:21:06.758 "data_offset": 0, 00:21:06.758 "data_size": 0 00:21:06.758 } 00:21:06.758 ] 00:21:06.758 }' 00:21:06.758 23:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:06.758 23:02:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.020 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:07.020 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.020 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.020 [2024-12-09 23:02:42.239518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:07.020 [2024-12-09 23:02:42.239599] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:07.020 [2024-12-09 23:02:42.239617] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:07.020 [2024-12-09 23:02:42.239999] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:07.020 [2024-12-09 23:02:42.240228] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000007e80 00:21:07.020 [2024-12-09 23:02:42.240240] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:07.020 [2024-12-09 23:02:42.240562] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:07.020 BaseBdev3 00:21:07.020 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.020 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:21:07.020 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:07.020 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:07.020 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:07.020 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:07.020 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:07.020 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:07.020 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.020 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.020 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.020 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:07.020 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.020 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.020 [ 00:21:07.020 { 00:21:07.020 "name": "BaseBdev3", 00:21:07.020 "aliases": [ 00:21:07.020 
"cbcf8852-cffd-4f58-bb70-4b682ccae175" 00:21:07.020 ], 00:21:07.020 "product_name": "Malloc disk", 00:21:07.020 "block_size": 512, 00:21:07.020 "num_blocks": 65536, 00:21:07.020 "uuid": "cbcf8852-cffd-4f58-bb70-4b682ccae175", 00:21:07.020 "assigned_rate_limits": { 00:21:07.020 "rw_ios_per_sec": 0, 00:21:07.020 "rw_mbytes_per_sec": 0, 00:21:07.020 "r_mbytes_per_sec": 0, 00:21:07.020 "w_mbytes_per_sec": 0 00:21:07.020 }, 00:21:07.020 "claimed": true, 00:21:07.020 "claim_type": "exclusive_write", 00:21:07.020 "zoned": false, 00:21:07.020 "supported_io_types": { 00:21:07.020 "read": true, 00:21:07.020 "write": true, 00:21:07.020 "unmap": true, 00:21:07.020 "flush": true, 00:21:07.020 "reset": true, 00:21:07.020 "nvme_admin": false, 00:21:07.020 "nvme_io": false, 00:21:07.020 "nvme_io_md": false, 00:21:07.020 "write_zeroes": true, 00:21:07.020 "zcopy": true, 00:21:07.020 "get_zone_info": false, 00:21:07.020 "zone_management": false, 00:21:07.020 "zone_append": false, 00:21:07.020 "compare": false, 00:21:07.020 "compare_and_write": false, 00:21:07.020 "abort": true, 00:21:07.020 "seek_hole": false, 00:21:07.020 "seek_data": false, 00:21:07.020 "copy": true, 00:21:07.020 "nvme_iov_md": false 00:21:07.020 }, 00:21:07.020 "memory_domains": [ 00:21:07.020 { 00:21:07.020 "dma_device_id": "system", 00:21:07.020 "dma_device_type": 1 00:21:07.020 }, 00:21:07.020 { 00:21:07.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:07.020 "dma_device_type": 2 00:21:07.020 } 00:21:07.020 ], 00:21:07.020 "driver_specific": {} 00:21:07.020 } 00:21:07.020 ] 00:21:07.020 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.020 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:07.020 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:07.020 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:07.020 
23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:21:07.020 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:07.020 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:07.020 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:07.021 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:07.021 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:07.021 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:07.021 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:07.021 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:07.021 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:07.021 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.021 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:07.021 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.021 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.021 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.021 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:07.021 "name": "Existed_Raid", 00:21:07.021 "uuid": "65fb7075-971b-4c7c-857a-7271dac31f74", 00:21:07.021 "strip_size_kb": 0, 00:21:07.021 "state": "online", 00:21:07.021 "raid_level": 
"raid1", 00:21:07.021 "superblock": false, 00:21:07.021 "num_base_bdevs": 3, 00:21:07.021 "num_base_bdevs_discovered": 3, 00:21:07.021 "num_base_bdevs_operational": 3, 00:21:07.021 "base_bdevs_list": [ 00:21:07.021 { 00:21:07.021 "name": "BaseBdev1", 00:21:07.021 "uuid": "1b6320fa-ffc5-414f-b3a3-e681a3d9858a", 00:21:07.021 "is_configured": true, 00:21:07.021 "data_offset": 0, 00:21:07.021 "data_size": 65536 00:21:07.021 }, 00:21:07.021 { 00:21:07.021 "name": "BaseBdev2", 00:21:07.021 "uuid": "15f77505-79bf-4eaf-b05d-be2445a26be6", 00:21:07.021 "is_configured": true, 00:21:07.021 "data_offset": 0, 00:21:07.021 "data_size": 65536 00:21:07.021 }, 00:21:07.021 { 00:21:07.021 "name": "BaseBdev3", 00:21:07.021 "uuid": "cbcf8852-cffd-4f58-bb70-4b682ccae175", 00:21:07.021 "is_configured": true, 00:21:07.021 "data_offset": 0, 00:21:07.021 "data_size": 65536 00:21:07.021 } 00:21:07.021 ] 00:21:07.021 }' 00:21:07.021 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:07.021 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.283 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:07.283 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:07.283 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:07.283 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:07.283 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:07.283 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:07.283 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:07.283 23:02:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:07.283 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.283 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.283 [2024-12-09 23:02:42.636036] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:07.544 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.544 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:07.544 "name": "Existed_Raid", 00:21:07.544 "aliases": [ 00:21:07.544 "65fb7075-971b-4c7c-857a-7271dac31f74" 00:21:07.544 ], 00:21:07.544 "product_name": "Raid Volume", 00:21:07.544 "block_size": 512, 00:21:07.544 "num_blocks": 65536, 00:21:07.544 "uuid": "65fb7075-971b-4c7c-857a-7271dac31f74", 00:21:07.544 "assigned_rate_limits": { 00:21:07.544 "rw_ios_per_sec": 0, 00:21:07.544 "rw_mbytes_per_sec": 0, 00:21:07.544 "r_mbytes_per_sec": 0, 00:21:07.544 "w_mbytes_per_sec": 0 00:21:07.544 }, 00:21:07.544 "claimed": false, 00:21:07.544 "zoned": false, 00:21:07.544 "supported_io_types": { 00:21:07.544 "read": true, 00:21:07.544 "write": true, 00:21:07.544 "unmap": false, 00:21:07.544 "flush": false, 00:21:07.544 "reset": true, 00:21:07.544 "nvme_admin": false, 00:21:07.544 "nvme_io": false, 00:21:07.544 "nvme_io_md": false, 00:21:07.544 "write_zeroes": true, 00:21:07.544 "zcopy": false, 00:21:07.544 "get_zone_info": false, 00:21:07.544 "zone_management": false, 00:21:07.544 "zone_append": false, 00:21:07.544 "compare": false, 00:21:07.544 "compare_and_write": false, 00:21:07.544 "abort": false, 00:21:07.544 "seek_hole": false, 00:21:07.544 "seek_data": false, 00:21:07.544 "copy": false, 00:21:07.544 "nvme_iov_md": false 00:21:07.544 }, 00:21:07.544 "memory_domains": [ 00:21:07.544 { 00:21:07.544 "dma_device_id": "system", 00:21:07.544 "dma_device_type": 1 00:21:07.544 }, 00:21:07.544 { 
00:21:07.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:07.544 "dma_device_type": 2 00:21:07.544 }, 00:21:07.544 { 00:21:07.544 "dma_device_id": "system", 00:21:07.544 "dma_device_type": 1 00:21:07.544 }, 00:21:07.544 { 00:21:07.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:07.544 "dma_device_type": 2 00:21:07.544 }, 00:21:07.544 { 00:21:07.544 "dma_device_id": "system", 00:21:07.544 "dma_device_type": 1 00:21:07.544 }, 00:21:07.544 { 00:21:07.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:07.544 "dma_device_type": 2 00:21:07.544 } 00:21:07.544 ], 00:21:07.544 "driver_specific": { 00:21:07.544 "raid": { 00:21:07.544 "uuid": "65fb7075-971b-4c7c-857a-7271dac31f74", 00:21:07.544 "strip_size_kb": 0, 00:21:07.544 "state": "online", 00:21:07.544 "raid_level": "raid1", 00:21:07.544 "superblock": false, 00:21:07.544 "num_base_bdevs": 3, 00:21:07.544 "num_base_bdevs_discovered": 3, 00:21:07.544 "num_base_bdevs_operational": 3, 00:21:07.544 "base_bdevs_list": [ 00:21:07.544 { 00:21:07.544 "name": "BaseBdev1", 00:21:07.544 "uuid": "1b6320fa-ffc5-414f-b3a3-e681a3d9858a", 00:21:07.544 "is_configured": true, 00:21:07.544 "data_offset": 0, 00:21:07.544 "data_size": 65536 00:21:07.544 }, 00:21:07.544 { 00:21:07.544 "name": "BaseBdev2", 00:21:07.544 "uuid": "15f77505-79bf-4eaf-b05d-be2445a26be6", 00:21:07.544 "is_configured": true, 00:21:07.544 "data_offset": 0, 00:21:07.544 "data_size": 65536 00:21:07.544 }, 00:21:07.544 { 00:21:07.544 "name": "BaseBdev3", 00:21:07.544 "uuid": "cbcf8852-cffd-4f58-bb70-4b682ccae175", 00:21:07.544 "is_configured": true, 00:21:07.544 "data_offset": 0, 00:21:07.544 "data_size": 65536 00:21:07.544 } 00:21:07.544 ] 00:21:07.544 } 00:21:07.544 } 00:21:07.544 }' 00:21:07.544 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:07.544 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='BaseBdev1 00:21:07.544 BaseBdev2 00:21:07.544 BaseBdev3' 00:21:07.544 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:07.544 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:07.544 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:07.544 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:07.544 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:07.544 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.544 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.544 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.544 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:07.544 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:07.544 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:07.544 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:07.544 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:07.544 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.544 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.544 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:21:07.544 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:07.544 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:07.544 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:07.544 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:07.544 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:07.545 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.545 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.545 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.545 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:07.545 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:07.545 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:07.545 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.545 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.545 [2024-12-09 23:02:42.835808] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:07.804 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.804 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:07.804 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:21:07.804 23:02:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@198 -- # case $1 in 00:21:07.805 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:21:07.805 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:07.805 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:07.805 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:07.805 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:07.805 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:07.805 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:07.805 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:07.805 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:07.805 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:07.805 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:07.805 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:07.805 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.805 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:07.805 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.805 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.805 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.805 23:02:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:07.805 "name": "Existed_Raid", 00:21:07.805 "uuid": "65fb7075-971b-4c7c-857a-7271dac31f74", 00:21:07.805 "strip_size_kb": 0, 00:21:07.805 "state": "online", 00:21:07.805 "raid_level": "raid1", 00:21:07.805 "superblock": false, 00:21:07.805 "num_base_bdevs": 3, 00:21:07.805 "num_base_bdevs_discovered": 2, 00:21:07.805 "num_base_bdevs_operational": 2, 00:21:07.805 "base_bdevs_list": [ 00:21:07.805 { 00:21:07.805 "name": null, 00:21:07.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.805 "is_configured": false, 00:21:07.805 "data_offset": 0, 00:21:07.805 "data_size": 65536 00:21:07.805 }, 00:21:07.805 { 00:21:07.805 "name": "BaseBdev2", 00:21:07.805 "uuid": "15f77505-79bf-4eaf-b05d-be2445a26be6", 00:21:07.805 "is_configured": true, 00:21:07.805 "data_offset": 0, 00:21:07.805 "data_size": 65536 00:21:07.805 }, 00:21:07.805 { 00:21:07.805 "name": "BaseBdev3", 00:21:07.805 "uuid": "cbcf8852-cffd-4f58-bb70-4b682ccae175", 00:21:07.805 "is_configured": true, 00:21:07.805 "data_offset": 0, 00:21:07.805 "data_size": 65536 00:21:07.805 } 00:21:07.805 ] 00:21:07.805 }' 00:21:07.805 23:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:07.805 23:02:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.065 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:08.065 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:08.065 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:08.065 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.065 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.065 23:02:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:08.065 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.065 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:08.065 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:08.065 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:08.065 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.065 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.065 [2024-12-09 23:02:43.271046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:08.065 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.065 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:08.065 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:08.065 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.065 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.065 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.065 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:08.065 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.065 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:08.065 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:08.065 23:02:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:21:08.065 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.065 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.065 [2024-12-09 23:02:43.374523] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:08.065 [2024-12-09 23:02:43.374847] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:08.327 [2024-12-09 23:02:43.445843] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:08.327 [2024-12-09 23:02:43.446083] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:08.327 [2024-12-09 23:02:43.446143] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:08.327 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.327 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:08.327 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:08.327 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:08.327 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.327 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.327 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.327 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.327 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:08.327 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 
-- # '[' -n '' ']' 00:21:08.327 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:21:08.327 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:08.327 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:08.327 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:08.327 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.327 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.327 BaseBdev2 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.328 [ 00:21:08.328 { 00:21:08.328 "name": "BaseBdev2", 00:21:08.328 "aliases": [ 00:21:08.328 "d4bcd2a4-34ff-4ef7-8612-e784bb2d55d5" 00:21:08.328 ], 00:21:08.328 "product_name": "Malloc disk", 00:21:08.328 "block_size": 512, 00:21:08.328 "num_blocks": 65536, 00:21:08.328 "uuid": "d4bcd2a4-34ff-4ef7-8612-e784bb2d55d5", 00:21:08.328 "assigned_rate_limits": { 00:21:08.328 "rw_ios_per_sec": 0, 00:21:08.328 "rw_mbytes_per_sec": 0, 00:21:08.328 "r_mbytes_per_sec": 0, 00:21:08.328 "w_mbytes_per_sec": 0 00:21:08.328 }, 00:21:08.328 "claimed": false, 00:21:08.328 "zoned": false, 00:21:08.328 "supported_io_types": { 00:21:08.328 "read": true, 00:21:08.328 "write": true, 00:21:08.328 "unmap": true, 00:21:08.328 "flush": true, 00:21:08.328 "reset": true, 00:21:08.328 "nvme_admin": false, 00:21:08.328 "nvme_io": false, 00:21:08.328 "nvme_io_md": false, 00:21:08.328 "write_zeroes": true, 00:21:08.328 "zcopy": true, 00:21:08.328 "get_zone_info": false, 00:21:08.328 "zone_management": false, 00:21:08.328 "zone_append": false, 00:21:08.328 "compare": false, 00:21:08.328 "compare_and_write": false, 00:21:08.328 "abort": true, 00:21:08.328 "seek_hole": false, 00:21:08.328 "seek_data": false, 00:21:08.328 "copy": true, 00:21:08.328 "nvme_iov_md": false 00:21:08.328 }, 00:21:08.328 "memory_domains": [ 00:21:08.328 { 00:21:08.328 "dma_device_id": "system", 00:21:08.328 "dma_device_type": 1 00:21:08.328 }, 00:21:08.328 { 00:21:08.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:08.328 "dma_device_type": 2 00:21:08.328 } 00:21:08.328 ], 00:21:08.328 "driver_specific": {} 00:21:08.328 } 00:21:08.328 ] 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.328 BaseBdev3 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.328 [ 00:21:08.328 { 00:21:08.328 "name": "BaseBdev3", 00:21:08.328 "aliases": [ 00:21:08.328 "fa821abd-a35c-43d2-bedf-d81953171962" 00:21:08.328 ], 00:21:08.328 "product_name": "Malloc disk", 00:21:08.328 "block_size": 512, 00:21:08.328 "num_blocks": 65536, 00:21:08.328 "uuid": "fa821abd-a35c-43d2-bedf-d81953171962", 00:21:08.328 "assigned_rate_limits": { 00:21:08.328 "rw_ios_per_sec": 0, 00:21:08.328 "rw_mbytes_per_sec": 0, 00:21:08.328 "r_mbytes_per_sec": 0, 00:21:08.328 "w_mbytes_per_sec": 0 00:21:08.328 }, 00:21:08.328 "claimed": false, 00:21:08.328 "zoned": false, 00:21:08.328 "supported_io_types": { 00:21:08.328 "read": true, 00:21:08.328 "write": true, 00:21:08.328 "unmap": true, 00:21:08.328 "flush": true, 00:21:08.328 "reset": true, 00:21:08.328 "nvme_admin": false, 00:21:08.328 "nvme_io": false, 00:21:08.328 "nvme_io_md": false, 00:21:08.328 "write_zeroes": true, 00:21:08.328 "zcopy": true, 00:21:08.328 "get_zone_info": false, 00:21:08.328 "zone_management": false, 00:21:08.328 "zone_append": false, 00:21:08.328 "compare": false, 00:21:08.328 "compare_and_write": false, 00:21:08.328 "abort": true, 00:21:08.328 "seek_hole": false, 00:21:08.328 "seek_data": false, 00:21:08.328 "copy": true, 00:21:08.328 "nvme_iov_md": false 00:21:08.328 }, 00:21:08.328 "memory_domains": [ 00:21:08.328 { 00:21:08.328 "dma_device_id": "system", 00:21:08.328 "dma_device_type": 1 00:21:08.328 }, 00:21:08.328 { 00:21:08.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:08.328 "dma_device_type": 2 00:21:08.328 } 00:21:08.328 ], 00:21:08.328 "driver_specific": {} 00:21:08.328 } 00:21:08.328 ] 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.328 [2024-12-09 23:02:43.612821] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:08.328 [2024-12-09 23:02:43.613057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:08.328 [2024-12-09 23:02:43.613175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:08.328 [2024-12-09 23:02:43.615390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.328 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:08.328 "name": "Existed_Raid", 00:21:08.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.328 "strip_size_kb": 0, 00:21:08.328 "state": "configuring", 00:21:08.328 "raid_level": "raid1", 00:21:08.328 "superblock": false, 00:21:08.328 "num_base_bdevs": 3, 00:21:08.328 "num_base_bdevs_discovered": 2, 00:21:08.328 "num_base_bdevs_operational": 3, 00:21:08.328 "base_bdevs_list": [ 00:21:08.328 { 00:21:08.328 "name": "BaseBdev1", 00:21:08.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.328 "is_configured": false, 00:21:08.328 "data_offset": 0, 00:21:08.328 "data_size": 0 00:21:08.328 }, 00:21:08.328 { 00:21:08.328 "name": "BaseBdev2", 00:21:08.328 "uuid": "d4bcd2a4-34ff-4ef7-8612-e784bb2d55d5", 00:21:08.328 "is_configured": true, 00:21:08.328 "data_offset": 0, 00:21:08.328 "data_size": 
65536 00:21:08.329 }, 00:21:08.329 { 00:21:08.329 "name": "BaseBdev3", 00:21:08.329 "uuid": "fa821abd-a35c-43d2-bedf-d81953171962", 00:21:08.329 "is_configured": true, 00:21:08.329 "data_offset": 0, 00:21:08.329 "data_size": 65536 00:21:08.329 } 00:21:08.329 ] 00:21:08.329 }' 00:21:08.329 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:08.329 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.590 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:08.590 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.590 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.590 [2024-12-09 23:02:43.940953] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:08.590 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.590 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:08.590 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:08.590 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:08.590 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:08.590 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:08.590 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:08.590 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:08.590 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:08.590 23:02:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:08.590 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:08.590 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.590 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.590 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.590 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:08.857 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.857 23:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:08.857 "name": "Existed_Raid", 00:21:08.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.857 "strip_size_kb": 0, 00:21:08.857 "state": "configuring", 00:21:08.857 "raid_level": "raid1", 00:21:08.857 "superblock": false, 00:21:08.857 "num_base_bdevs": 3, 00:21:08.857 "num_base_bdevs_discovered": 1, 00:21:08.857 "num_base_bdevs_operational": 3, 00:21:08.857 "base_bdevs_list": [ 00:21:08.857 { 00:21:08.857 "name": "BaseBdev1", 00:21:08.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.857 "is_configured": false, 00:21:08.857 "data_offset": 0, 00:21:08.857 "data_size": 0 00:21:08.857 }, 00:21:08.857 { 00:21:08.857 "name": null, 00:21:08.857 "uuid": "d4bcd2a4-34ff-4ef7-8612-e784bb2d55d5", 00:21:08.857 "is_configured": false, 00:21:08.857 "data_offset": 0, 00:21:08.857 "data_size": 65536 00:21:08.857 }, 00:21:08.857 { 00:21:08.857 "name": "BaseBdev3", 00:21:08.857 "uuid": "fa821abd-a35c-43d2-bedf-d81953171962", 00:21:08.857 "is_configured": true, 00:21:08.857 "data_offset": 0, 00:21:08.857 "data_size": 65536 00:21:08.857 } 00:21:08.857 ] 00:21:08.857 }' 00:21:08.857 23:02:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:08.857 23:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.121 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.121 23:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.121 23:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.121 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:09.121 23:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.121 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:21:09.121 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:09.121 23:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.121 23:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.121 [2024-12-09 23:02:44.329555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:09.121 BaseBdev1 00:21:09.121 23:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.121 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:21:09.121 23:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:09.121 23:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:09.121 23:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:09.121 23:02:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:09.121 23:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:09.121 23:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:09.121 23:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.121 23:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.121 23:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.121 23:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:09.121 23:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.121 23:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.121 [ 00:21:09.121 { 00:21:09.121 "name": "BaseBdev1", 00:21:09.121 "aliases": [ 00:21:09.121 "cee176f6-d860-4083-9e71-1093845d424a" 00:21:09.121 ], 00:21:09.121 "product_name": "Malloc disk", 00:21:09.121 "block_size": 512, 00:21:09.121 "num_blocks": 65536, 00:21:09.121 "uuid": "cee176f6-d860-4083-9e71-1093845d424a", 00:21:09.122 "assigned_rate_limits": { 00:21:09.122 "rw_ios_per_sec": 0, 00:21:09.122 "rw_mbytes_per_sec": 0, 00:21:09.122 "r_mbytes_per_sec": 0, 00:21:09.122 "w_mbytes_per_sec": 0 00:21:09.122 }, 00:21:09.122 "claimed": true, 00:21:09.122 "claim_type": "exclusive_write", 00:21:09.122 "zoned": false, 00:21:09.122 "supported_io_types": { 00:21:09.122 "read": true, 00:21:09.122 "write": true, 00:21:09.122 "unmap": true, 00:21:09.122 "flush": true, 00:21:09.122 "reset": true, 00:21:09.122 "nvme_admin": false, 00:21:09.122 "nvme_io": false, 00:21:09.122 "nvme_io_md": false, 00:21:09.122 "write_zeroes": true, 00:21:09.122 "zcopy": true, 00:21:09.122 "get_zone_info": false, 00:21:09.122 "zone_management": false, 
00:21:09.122 "zone_append": false, 00:21:09.122 "compare": false, 00:21:09.122 "compare_and_write": false, 00:21:09.122 "abort": true, 00:21:09.122 "seek_hole": false, 00:21:09.122 "seek_data": false, 00:21:09.122 "copy": true, 00:21:09.122 "nvme_iov_md": false 00:21:09.122 }, 00:21:09.122 "memory_domains": [ 00:21:09.122 { 00:21:09.122 "dma_device_id": "system", 00:21:09.122 "dma_device_type": 1 00:21:09.122 }, 00:21:09.122 { 00:21:09.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:09.122 "dma_device_type": 2 00:21:09.122 } 00:21:09.122 ], 00:21:09.122 "driver_specific": {} 00:21:09.122 } 00:21:09.122 ] 00:21:09.122 23:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.122 23:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:09.122 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:09.122 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:09.122 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:09.122 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:09.122 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:09.122 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:09.122 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:09.122 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:09.122 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:09.122 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:09.122 
23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.122 23:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.122 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:09.122 23:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.122 23:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.122 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:09.122 "name": "Existed_Raid", 00:21:09.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.122 "strip_size_kb": 0, 00:21:09.122 "state": "configuring", 00:21:09.122 "raid_level": "raid1", 00:21:09.122 "superblock": false, 00:21:09.122 "num_base_bdevs": 3, 00:21:09.122 "num_base_bdevs_discovered": 2, 00:21:09.122 "num_base_bdevs_operational": 3, 00:21:09.122 "base_bdevs_list": [ 00:21:09.122 { 00:21:09.122 "name": "BaseBdev1", 00:21:09.122 "uuid": "cee176f6-d860-4083-9e71-1093845d424a", 00:21:09.122 "is_configured": true, 00:21:09.122 "data_offset": 0, 00:21:09.122 "data_size": 65536 00:21:09.122 }, 00:21:09.122 { 00:21:09.122 "name": null, 00:21:09.122 "uuid": "d4bcd2a4-34ff-4ef7-8612-e784bb2d55d5", 00:21:09.122 "is_configured": false, 00:21:09.122 "data_offset": 0, 00:21:09.122 "data_size": 65536 00:21:09.122 }, 00:21:09.122 { 00:21:09.122 "name": "BaseBdev3", 00:21:09.122 "uuid": "fa821abd-a35c-43d2-bedf-d81953171962", 00:21:09.122 "is_configured": true, 00:21:09.122 "data_offset": 0, 00:21:09.122 "data_size": 65536 00:21:09.122 } 00:21:09.122 ] 00:21:09.122 }' 00:21:09.122 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:09.122 23:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.383 23:02:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.383 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:09.383 23:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.383 23:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.383 23:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.383 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:21:09.383 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:21:09.383 23:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.383 23:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.383 [2024-12-09 23:02:44.705708] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:09.383 23:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.383 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:09.383 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:09.383 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:09.383 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:09.384 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:09.384 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:09.384 23:02:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:09.384 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:09.384 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:09.384 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:09.384 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.384 23:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.384 23:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.384 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:09.384 23:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.644 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:09.644 "name": "Existed_Raid", 00:21:09.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.644 "strip_size_kb": 0, 00:21:09.644 "state": "configuring", 00:21:09.644 "raid_level": "raid1", 00:21:09.645 "superblock": false, 00:21:09.645 "num_base_bdevs": 3, 00:21:09.645 "num_base_bdevs_discovered": 1, 00:21:09.645 "num_base_bdevs_operational": 3, 00:21:09.645 "base_bdevs_list": [ 00:21:09.645 { 00:21:09.645 "name": "BaseBdev1", 00:21:09.645 "uuid": "cee176f6-d860-4083-9e71-1093845d424a", 00:21:09.645 "is_configured": true, 00:21:09.645 "data_offset": 0, 00:21:09.645 "data_size": 65536 00:21:09.645 }, 00:21:09.645 { 00:21:09.645 "name": null, 00:21:09.645 "uuid": "d4bcd2a4-34ff-4ef7-8612-e784bb2d55d5", 00:21:09.645 "is_configured": false, 00:21:09.645 "data_offset": 0, 00:21:09.645 "data_size": 65536 00:21:09.645 }, 00:21:09.645 { 00:21:09.645 "name": null, 00:21:09.645 "uuid": "fa821abd-a35c-43d2-bedf-d81953171962", 
00:21:09.645 "is_configured": false, 00:21:09.645 "data_offset": 0, 00:21:09.645 "data_size": 65536 00:21:09.645 } 00:21:09.645 ] 00:21:09.645 }' 00:21:09.645 23:02:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:09.645 23:02:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.912 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.912 23:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.912 23:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.912 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:09.912 23:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.912 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:21:09.912 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:09.912 23:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.912 23:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.912 [2024-12-09 23:02:45.085905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:09.912 23:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.912 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:09.912 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:09.912 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:21:09.912 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:09.912 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:09.912 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:09.912 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:09.912 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:09.912 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:09.912 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:09.912 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.912 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:09.912 23:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.912 23:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.912 23:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.912 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:09.912 "name": "Existed_Raid", 00:21:09.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.912 "strip_size_kb": 0, 00:21:09.912 "state": "configuring", 00:21:09.912 "raid_level": "raid1", 00:21:09.912 "superblock": false, 00:21:09.912 "num_base_bdevs": 3, 00:21:09.912 "num_base_bdevs_discovered": 2, 00:21:09.912 "num_base_bdevs_operational": 3, 00:21:09.912 "base_bdevs_list": [ 00:21:09.912 { 00:21:09.912 "name": "BaseBdev1", 00:21:09.912 "uuid": "cee176f6-d860-4083-9e71-1093845d424a", 00:21:09.912 
"is_configured": true, 00:21:09.912 "data_offset": 0, 00:21:09.912 "data_size": 65536 00:21:09.912 }, 00:21:09.912 { 00:21:09.912 "name": null, 00:21:09.912 "uuid": "d4bcd2a4-34ff-4ef7-8612-e784bb2d55d5", 00:21:09.912 "is_configured": false, 00:21:09.912 "data_offset": 0, 00:21:09.912 "data_size": 65536 00:21:09.912 }, 00:21:09.912 { 00:21:09.912 "name": "BaseBdev3", 00:21:09.912 "uuid": "fa821abd-a35c-43d2-bedf-d81953171962", 00:21:09.912 "is_configured": true, 00:21:09.912 "data_offset": 0, 00:21:09.912 "data_size": 65536 00:21:09.912 } 00:21:09.912 ] 00:21:09.912 }' 00:21:09.912 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:09.912 23:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:10.176 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.176 23:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.176 23:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:10.176 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:10.176 23:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.176 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:21:10.176 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:10.176 23:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.176 23:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:10.176 [2024-12-09 23:02:45.441960] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:10.176 23:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:21:10.176 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:10.176 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:10.176 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:10.176 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:10.176 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:10.176 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:10.176 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:10.176 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:10.176 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:10.176 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:10.176 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.176 23:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.176 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:10.176 23:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:10.176 23:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.438 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:10.438 "name": "Existed_Raid", 00:21:10.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.438 "strip_size_kb": 0, 00:21:10.438 "state": 
"configuring", 00:21:10.438 "raid_level": "raid1", 00:21:10.438 "superblock": false, 00:21:10.438 "num_base_bdevs": 3, 00:21:10.438 "num_base_bdevs_discovered": 1, 00:21:10.438 "num_base_bdevs_operational": 3, 00:21:10.438 "base_bdevs_list": [ 00:21:10.438 { 00:21:10.438 "name": null, 00:21:10.438 "uuid": "cee176f6-d860-4083-9e71-1093845d424a", 00:21:10.438 "is_configured": false, 00:21:10.438 "data_offset": 0, 00:21:10.438 "data_size": 65536 00:21:10.438 }, 00:21:10.438 { 00:21:10.438 "name": null, 00:21:10.438 "uuid": "d4bcd2a4-34ff-4ef7-8612-e784bb2d55d5", 00:21:10.438 "is_configured": false, 00:21:10.438 "data_offset": 0, 00:21:10.438 "data_size": 65536 00:21:10.438 }, 00:21:10.438 { 00:21:10.438 "name": "BaseBdev3", 00:21:10.438 "uuid": "fa821abd-a35c-43d2-bedf-d81953171962", 00:21:10.438 "is_configured": true, 00:21:10.438 "data_offset": 0, 00:21:10.438 "data_size": 65536 00:21:10.438 } 00:21:10.438 ] 00:21:10.438 }' 00:21:10.438 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:10.438 23:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:10.699 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.699 23:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.699 23:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:10.699 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:10.699 23:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.699 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:21:10.699 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:10.699 23:02:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.699 23:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:10.699 [2024-12-09 23:02:45.901593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:10.699 23:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.699 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:10.699 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:10.699 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:10.699 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:10.699 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:10.699 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:10.699 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:10.699 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:10.699 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:10.699 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:10.699 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.699 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:10.699 23:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.699 23:02:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:10.699 23:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.699 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:10.699 "name": "Existed_Raid", 00:21:10.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.699 "strip_size_kb": 0, 00:21:10.699 "state": "configuring", 00:21:10.699 "raid_level": "raid1", 00:21:10.699 "superblock": false, 00:21:10.699 "num_base_bdevs": 3, 00:21:10.699 "num_base_bdevs_discovered": 2, 00:21:10.699 "num_base_bdevs_operational": 3, 00:21:10.699 "base_bdevs_list": [ 00:21:10.699 { 00:21:10.699 "name": null, 00:21:10.699 "uuid": "cee176f6-d860-4083-9e71-1093845d424a", 00:21:10.699 "is_configured": false, 00:21:10.699 "data_offset": 0, 00:21:10.699 "data_size": 65536 00:21:10.699 }, 00:21:10.699 { 00:21:10.699 "name": "BaseBdev2", 00:21:10.700 "uuid": "d4bcd2a4-34ff-4ef7-8612-e784bb2d55d5", 00:21:10.700 "is_configured": true, 00:21:10.700 "data_offset": 0, 00:21:10.700 "data_size": 65536 00:21:10.700 }, 00:21:10.700 { 00:21:10.700 "name": "BaseBdev3", 00:21:10.700 "uuid": "fa821abd-a35c-43d2-bedf-d81953171962", 00:21:10.700 "is_configured": true, 00:21:10.700 "data_offset": 0, 00:21:10.700 "data_size": 65536 00:21:10.700 } 00:21:10.700 ] 00:21:10.700 }' 00:21:10.700 23:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:10.700 23:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.037 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.037 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.037 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.037 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:21:11.037 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.037 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:21:11.037 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.037 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.037 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.037 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:11.037 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.037 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cee176f6-d860-4083-9e71-1093845d424a 00:21:11.037 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.037 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.037 [2024-12-09 23:02:46.333980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:11.037 [2024-12-09 23:02:46.334082] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:11.037 [2024-12-09 23:02:46.334096] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:11.037 [2024-12-09 23:02:46.334558] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:21:11.037 [2024-12-09 23:02:46.334800] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:11.037 [2024-12-09 23:02:46.334844] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000008200 00:21:11.037 [2024-12-09 23:02:46.335266] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:11.037 NewBaseBdev 00:21:11.037 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.037 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:21:11.037 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:21:11.037 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:11.037 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:11.037 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:11.037 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:11.037 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:11.037 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.037 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.037 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.037 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:11.037 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.037 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.037 [ 00:21:11.037 { 00:21:11.037 "name": "NewBaseBdev", 00:21:11.037 "aliases": [ 00:21:11.037 "cee176f6-d860-4083-9e71-1093845d424a" 00:21:11.037 ], 00:21:11.037 "product_name": "Malloc disk", 00:21:11.037 "block_size": 512, 00:21:11.037 "num_blocks": 65536, 
00:21:11.037 "uuid": "cee176f6-d860-4083-9e71-1093845d424a", 00:21:11.037 "assigned_rate_limits": { 00:21:11.037 "rw_ios_per_sec": 0, 00:21:11.037 "rw_mbytes_per_sec": 0, 00:21:11.037 "r_mbytes_per_sec": 0, 00:21:11.037 "w_mbytes_per_sec": 0 00:21:11.037 }, 00:21:11.037 "claimed": true, 00:21:11.037 "claim_type": "exclusive_write", 00:21:11.037 "zoned": false, 00:21:11.037 "supported_io_types": { 00:21:11.037 "read": true, 00:21:11.037 "write": true, 00:21:11.037 "unmap": true, 00:21:11.037 "flush": true, 00:21:11.037 "reset": true, 00:21:11.037 "nvme_admin": false, 00:21:11.037 "nvme_io": false, 00:21:11.037 "nvme_io_md": false, 00:21:11.037 "write_zeroes": true, 00:21:11.037 "zcopy": true, 00:21:11.037 "get_zone_info": false, 00:21:11.037 "zone_management": false, 00:21:11.037 "zone_append": false, 00:21:11.037 "compare": false, 00:21:11.037 "compare_and_write": false, 00:21:11.037 "abort": true, 00:21:11.037 "seek_hole": false, 00:21:11.037 "seek_data": false, 00:21:11.037 "copy": true, 00:21:11.037 "nvme_iov_md": false 00:21:11.037 }, 00:21:11.037 "memory_domains": [ 00:21:11.037 { 00:21:11.037 "dma_device_id": "system", 00:21:11.037 "dma_device_type": 1 00:21:11.037 }, 00:21:11.038 { 00:21:11.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.038 "dma_device_type": 2 00:21:11.038 } 00:21:11.038 ], 00:21:11.038 "driver_specific": {} 00:21:11.038 } 00:21:11.038 ] 00:21:11.038 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.038 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:11.038 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:21:11.038 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:11.038 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:11.038 
23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:11.038 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:11.038 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:11.038 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:11.038 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:11.038 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:11.038 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:11.038 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.038 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:11.038 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.038 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.038 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.298 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:11.298 "name": "Existed_Raid", 00:21:11.298 "uuid": "7f7f4291-85e4-466d-a6a5-410e88149787", 00:21:11.298 "strip_size_kb": 0, 00:21:11.298 "state": "online", 00:21:11.298 "raid_level": "raid1", 00:21:11.298 "superblock": false, 00:21:11.298 "num_base_bdevs": 3, 00:21:11.298 "num_base_bdevs_discovered": 3, 00:21:11.298 "num_base_bdevs_operational": 3, 00:21:11.298 "base_bdevs_list": [ 00:21:11.298 { 00:21:11.298 "name": "NewBaseBdev", 00:21:11.298 "uuid": "cee176f6-d860-4083-9e71-1093845d424a", 00:21:11.298 "is_configured": true, 00:21:11.298 
"data_offset": 0, 00:21:11.298 "data_size": 65536 00:21:11.298 }, 00:21:11.298 { 00:21:11.298 "name": "BaseBdev2", 00:21:11.298 "uuid": "d4bcd2a4-34ff-4ef7-8612-e784bb2d55d5", 00:21:11.298 "is_configured": true, 00:21:11.298 "data_offset": 0, 00:21:11.298 "data_size": 65536 00:21:11.298 }, 00:21:11.298 { 00:21:11.298 "name": "BaseBdev3", 00:21:11.298 "uuid": "fa821abd-a35c-43d2-bedf-d81953171962", 00:21:11.298 "is_configured": true, 00:21:11.298 "data_offset": 0, 00:21:11.298 "data_size": 65536 00:21:11.298 } 00:21:11.298 ] 00:21:11.298 }' 00:21:11.298 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:11.298 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.562 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:21:11.562 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:11.562 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:11.562 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:11.562 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:11.562 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:11.562 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:11.562 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.562 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.562 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:11.562 [2024-12-09 23:02:46.702558] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:21:11.562 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.562 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:11.562 "name": "Existed_Raid", 00:21:11.562 "aliases": [ 00:21:11.562 "7f7f4291-85e4-466d-a6a5-410e88149787" 00:21:11.562 ], 00:21:11.562 "product_name": "Raid Volume", 00:21:11.562 "block_size": 512, 00:21:11.562 "num_blocks": 65536, 00:21:11.562 "uuid": "7f7f4291-85e4-466d-a6a5-410e88149787", 00:21:11.562 "assigned_rate_limits": { 00:21:11.562 "rw_ios_per_sec": 0, 00:21:11.562 "rw_mbytes_per_sec": 0, 00:21:11.562 "r_mbytes_per_sec": 0, 00:21:11.562 "w_mbytes_per_sec": 0 00:21:11.562 }, 00:21:11.562 "claimed": false, 00:21:11.562 "zoned": false, 00:21:11.562 "supported_io_types": { 00:21:11.562 "read": true, 00:21:11.562 "write": true, 00:21:11.562 "unmap": false, 00:21:11.562 "flush": false, 00:21:11.562 "reset": true, 00:21:11.562 "nvme_admin": false, 00:21:11.562 "nvme_io": false, 00:21:11.562 "nvme_io_md": false, 00:21:11.562 "write_zeroes": true, 00:21:11.562 "zcopy": false, 00:21:11.562 "get_zone_info": false, 00:21:11.562 "zone_management": false, 00:21:11.562 "zone_append": false, 00:21:11.562 "compare": false, 00:21:11.562 "compare_and_write": false, 00:21:11.562 "abort": false, 00:21:11.562 "seek_hole": false, 00:21:11.562 "seek_data": false, 00:21:11.562 "copy": false, 00:21:11.562 "nvme_iov_md": false 00:21:11.562 }, 00:21:11.562 "memory_domains": [ 00:21:11.562 { 00:21:11.562 "dma_device_id": "system", 00:21:11.562 "dma_device_type": 1 00:21:11.562 }, 00:21:11.562 { 00:21:11.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.562 "dma_device_type": 2 00:21:11.562 }, 00:21:11.562 { 00:21:11.562 "dma_device_id": "system", 00:21:11.562 "dma_device_type": 1 00:21:11.562 }, 00:21:11.562 { 00:21:11.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.562 "dma_device_type": 2 00:21:11.562 }, 00:21:11.562 { 00:21:11.562 "dma_device_id": 
"system", 00:21:11.562 "dma_device_type": 1 00:21:11.562 }, 00:21:11.562 { 00:21:11.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.562 "dma_device_type": 2 00:21:11.562 } 00:21:11.562 ], 00:21:11.562 "driver_specific": { 00:21:11.562 "raid": { 00:21:11.562 "uuid": "7f7f4291-85e4-466d-a6a5-410e88149787", 00:21:11.562 "strip_size_kb": 0, 00:21:11.563 "state": "online", 00:21:11.563 "raid_level": "raid1", 00:21:11.563 "superblock": false, 00:21:11.563 "num_base_bdevs": 3, 00:21:11.563 "num_base_bdevs_discovered": 3, 00:21:11.563 "num_base_bdevs_operational": 3, 00:21:11.563 "base_bdevs_list": [ 00:21:11.563 { 00:21:11.563 "name": "NewBaseBdev", 00:21:11.563 "uuid": "cee176f6-d860-4083-9e71-1093845d424a", 00:21:11.563 "is_configured": true, 00:21:11.563 "data_offset": 0, 00:21:11.563 "data_size": 65536 00:21:11.563 }, 00:21:11.563 { 00:21:11.563 "name": "BaseBdev2", 00:21:11.563 "uuid": "d4bcd2a4-34ff-4ef7-8612-e784bb2d55d5", 00:21:11.563 "is_configured": true, 00:21:11.563 "data_offset": 0, 00:21:11.563 "data_size": 65536 00:21:11.563 }, 00:21:11.563 { 00:21:11.563 "name": "BaseBdev3", 00:21:11.563 "uuid": "fa821abd-a35c-43d2-bedf-d81953171962", 00:21:11.563 "is_configured": true, 00:21:11.563 "data_offset": 0, 00:21:11.563 "data_size": 65536 00:21:11.563 } 00:21:11.563 ] 00:21:11.563 } 00:21:11.563 } 00:21:11.563 }' 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:21:11.563 BaseBdev2 00:21:11.563 BaseBdev3' 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:11.563 23:02:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:11.563 
23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.563 [2024-12-09 23:02:46.894173] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:11.563 [2024-12-09 23:02:46.894218] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:11.563 [2024-12-09 23:02:46.894319] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:11.563 [2024-12-09 23:02:46.894650] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:11.563 [2024-12-09 23:02:46.894661] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 65756 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65756 ']' 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65756 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:11.563 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65756 00:21:11.825 killing process with pid 65756 00:21:11.825 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:11.825 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:11.825 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65756' 00:21:11.825 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 65756 00:21:11.826 [2024-12-09 23:02:46.927921] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:11.826 23:02:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65756 00:21:11.826 [2024-12-09 23:02:47.145094] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:21:12.770 00:21:12.770 real 0m8.312s 00:21:12.770 user 0m12.787s 00:21:12.770 sys 0m1.497s 00:21:12.770 ************************************ 00:21:12.770 END TEST raid_state_function_test 00:21:12.770 ************************************ 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:21:12.770 23:02:48 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:21:12.770 23:02:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:12.770 23:02:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:12.770 23:02:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:12.770 ************************************ 00:21:12.770 START TEST raid_state_function_test_sb 00:21:12.770 ************************************ 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:12.770 
23:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:12.770 Process raid pid: 66355 00:21:12.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66355 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66355' 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66355 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@835 -- # '[' -z 66355 ']' 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:12.770 23:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:13.032 [2024-12-09 23:02:48.190295] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:21:13.032 [2024-12-09 23:02:48.190688] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.032 [2024-12-09 23:02:48.355401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.356 [2024-12-09 23:02:48.495223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.356 [2024-12-09 23:02:48.663638] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:13.356 [2024-12-09 23:02:48.663917] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:13.929 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:13.929 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:21:13.929 23:02:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:13.929 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.929 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.929 [2024-12-09 23:02:49.078144] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:13.929 [2024-12-09 23:02:49.078391] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:13.929 [2024-12-09 23:02:49.078469] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:13.929 [2024-12-09 23:02:49.078500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:13.929 [2024-12-09 23:02:49.078520] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:13.929 [2024-12-09 23:02:49.078542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:13.929 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.929 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:13.929 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:13.929 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:13.929 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:13.929 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:13.929 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:21:13.929 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:13.929 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:13.929 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:13.929 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:13.929 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.929 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.929 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.929 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:13.929 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.929 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:13.929 "name": "Existed_Raid", 00:21:13.929 "uuid": "7a4f79ef-c06b-4aa3-a2e8-f68c836e242c", 00:21:13.929 "strip_size_kb": 0, 00:21:13.929 "state": "configuring", 00:21:13.929 "raid_level": "raid1", 00:21:13.929 "superblock": true, 00:21:13.929 "num_base_bdevs": 3, 00:21:13.929 "num_base_bdevs_discovered": 0, 00:21:13.929 "num_base_bdevs_operational": 3, 00:21:13.929 "base_bdevs_list": [ 00:21:13.929 { 00:21:13.929 "name": "BaseBdev1", 00:21:13.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.929 "is_configured": false, 00:21:13.929 "data_offset": 0, 00:21:13.929 "data_size": 0 00:21:13.929 }, 00:21:13.929 { 00:21:13.929 "name": "BaseBdev2", 00:21:13.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.929 "is_configured": false, 00:21:13.929 "data_offset": 0, 00:21:13.929 "data_size": 0 
00:21:13.929 }, 00:21:13.929 { 00:21:13.929 "name": "BaseBdev3", 00:21:13.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.929 "is_configured": false, 00:21:13.929 "data_offset": 0, 00:21:13.929 "data_size": 0 00:21:13.929 } 00:21:13.929 ] 00:21:13.929 }' 00:21:13.929 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:13.929 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.189 [2024-12-09 23:02:49.422134] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:14.189 [2024-12-09 23:02:49.422176] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.189 [2024-12-09 23:02:49.430130] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:14.189 [2024-12-09 23:02:49.430186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:14.189 [2024-12-09 23:02:49.430196] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:21:14.189 [2024-12-09 23:02:49.430206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:14.189 [2024-12-09 23:02:49.430212] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:14.189 [2024-12-09 23:02:49.430222] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.189 [2024-12-09 23:02:49.467564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:14.189 BaseBdev1 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:14.189 23:02:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.189 [ 00:21:14.189 { 00:21:14.189 "name": "BaseBdev1", 00:21:14.189 "aliases": [ 00:21:14.189 "2cc38032-cbad-460e-93ef-ebcc6080fea6" 00:21:14.189 ], 00:21:14.189 "product_name": "Malloc disk", 00:21:14.189 "block_size": 512, 00:21:14.189 "num_blocks": 65536, 00:21:14.189 "uuid": "2cc38032-cbad-460e-93ef-ebcc6080fea6", 00:21:14.189 "assigned_rate_limits": { 00:21:14.189 "rw_ios_per_sec": 0, 00:21:14.189 "rw_mbytes_per_sec": 0, 00:21:14.189 "r_mbytes_per_sec": 0, 00:21:14.189 "w_mbytes_per_sec": 0 00:21:14.189 }, 00:21:14.189 "claimed": true, 00:21:14.189 "claim_type": "exclusive_write", 00:21:14.189 "zoned": false, 00:21:14.189 "supported_io_types": { 00:21:14.189 "read": true, 00:21:14.189 "write": true, 00:21:14.189 "unmap": true, 00:21:14.189 "flush": true, 00:21:14.189 "reset": true, 00:21:14.189 "nvme_admin": false, 00:21:14.189 "nvme_io": false, 00:21:14.189 "nvme_io_md": false, 00:21:14.189 "write_zeroes": true, 00:21:14.189 "zcopy": true, 00:21:14.189 "get_zone_info": false, 00:21:14.189 "zone_management": false, 00:21:14.189 "zone_append": false, 00:21:14.189 "compare": false, 00:21:14.189 "compare_and_write": false, 00:21:14.189 "abort": true, 00:21:14.189 "seek_hole": false, 00:21:14.189 "seek_data": false, 00:21:14.189 "copy": true, 00:21:14.189 "nvme_iov_md": false 00:21:14.189 }, 
00:21:14.189 "memory_domains": [ 00:21:14.189 { 00:21:14.189 "dma_device_id": "system", 00:21:14.189 "dma_device_type": 1 00:21:14.189 }, 00:21:14.189 { 00:21:14.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.189 "dma_device_type": 2 00:21:14.189 } 00:21:14.189 ], 00:21:14.189 "driver_specific": {} 00:21:14.189 } 00:21:14.189 ] 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.189 23:02:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:14.189 "name": "Existed_Raid", 00:21:14.189 "uuid": "9a96d5a3-6b97-4346-945e-15b01631d184", 00:21:14.189 "strip_size_kb": 0, 00:21:14.189 "state": "configuring", 00:21:14.189 "raid_level": "raid1", 00:21:14.189 "superblock": true, 00:21:14.189 "num_base_bdevs": 3, 00:21:14.189 "num_base_bdevs_discovered": 1, 00:21:14.189 "num_base_bdevs_operational": 3, 00:21:14.189 "base_bdevs_list": [ 00:21:14.189 { 00:21:14.189 "name": "BaseBdev1", 00:21:14.189 "uuid": "2cc38032-cbad-460e-93ef-ebcc6080fea6", 00:21:14.189 "is_configured": true, 00:21:14.189 "data_offset": 2048, 00:21:14.189 "data_size": 63488 00:21:14.189 }, 00:21:14.189 { 00:21:14.189 "name": "BaseBdev2", 00:21:14.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.189 "is_configured": false, 00:21:14.189 "data_offset": 0, 00:21:14.189 "data_size": 0 00:21:14.189 }, 00:21:14.189 { 00:21:14.189 "name": "BaseBdev3", 00:21:14.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.189 "is_configured": false, 00:21:14.189 "data_offset": 0, 00:21:14.189 "data_size": 0 00:21:14.189 } 00:21:14.189 ] 00:21:14.189 }' 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:14.189 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.762 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:14.762 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.762 
23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.762 [2024-12-09 23:02:49.823704] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:14.762 [2024-12-09 23:02:49.823918] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:14.762 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.762 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:14.762 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.762 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.762 [2024-12-09 23:02:49.831767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:14.762 [2024-12-09 23:02:49.834043] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:14.762 [2024-12-09 23:02:49.834132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:14.762 [2024-12-09 23:02:49.834144] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:14.762 [2024-12-09 23:02:49.834156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:14.762 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.762 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:14.762 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:14.762 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring 
raid1 0 3 00:21:14.762 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:14.762 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:14.762 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:14.762 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:14.762 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:14.762 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:14.762 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:14.762 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:14.762 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:14.762 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.762 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:14.762 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.762 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.762 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.762 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:14.762 "name": "Existed_Raid", 00:21:14.762 "uuid": "20ca9052-a921-4bea-82eb-ed51ebf5119e", 00:21:14.762 "strip_size_kb": 0, 00:21:14.762 "state": "configuring", 00:21:14.762 "raid_level": "raid1", 00:21:14.762 "superblock": true, 00:21:14.762 
"num_base_bdevs": 3, 00:21:14.762 "num_base_bdevs_discovered": 1, 00:21:14.762 "num_base_bdevs_operational": 3, 00:21:14.762 "base_bdevs_list": [ 00:21:14.762 { 00:21:14.762 "name": "BaseBdev1", 00:21:14.762 "uuid": "2cc38032-cbad-460e-93ef-ebcc6080fea6", 00:21:14.762 "is_configured": true, 00:21:14.762 "data_offset": 2048, 00:21:14.762 "data_size": 63488 00:21:14.762 }, 00:21:14.762 { 00:21:14.762 "name": "BaseBdev2", 00:21:14.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.762 "is_configured": false, 00:21:14.762 "data_offset": 0, 00:21:14.762 "data_size": 0 00:21:14.762 }, 00:21:14.762 { 00:21:14.762 "name": "BaseBdev3", 00:21:14.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.762 "is_configured": false, 00:21:14.762 "data_offset": 0, 00:21:14.762 "data_size": 0 00:21:14.762 } 00:21:14.762 ] 00:21:14.762 }' 00:21:14.762 23:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:14.762 23:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.023 [2024-12-09 23:02:50.191415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:15.023 BaseBdev2 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 
-- # local bdev_timeout= 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.023 [ 00:21:15.023 { 00:21:15.023 "name": "BaseBdev2", 00:21:15.023 "aliases": [ 00:21:15.023 "0d4a4d43-c951-4ba9-9430-c697f3505dbf" 00:21:15.023 ], 00:21:15.023 "product_name": "Malloc disk", 00:21:15.023 "block_size": 512, 00:21:15.023 "num_blocks": 65536, 00:21:15.023 "uuid": "0d4a4d43-c951-4ba9-9430-c697f3505dbf", 00:21:15.023 "assigned_rate_limits": { 00:21:15.023 "rw_ios_per_sec": 0, 00:21:15.023 "rw_mbytes_per_sec": 0, 00:21:15.023 "r_mbytes_per_sec": 0, 00:21:15.023 "w_mbytes_per_sec": 0 00:21:15.023 }, 00:21:15.023 "claimed": true, 00:21:15.023 "claim_type": "exclusive_write", 00:21:15.023 "zoned": false, 00:21:15.023 "supported_io_types": { 00:21:15.023 "read": true, 00:21:15.023 "write": true, 00:21:15.023 "unmap": true, 00:21:15.023 "flush": true, 00:21:15.023 "reset": true, 00:21:15.023 
"nvme_admin": false, 00:21:15.023 "nvme_io": false, 00:21:15.023 "nvme_io_md": false, 00:21:15.023 "write_zeroes": true, 00:21:15.023 "zcopy": true, 00:21:15.023 "get_zone_info": false, 00:21:15.023 "zone_management": false, 00:21:15.023 "zone_append": false, 00:21:15.023 "compare": false, 00:21:15.023 "compare_and_write": false, 00:21:15.023 "abort": true, 00:21:15.023 "seek_hole": false, 00:21:15.023 "seek_data": false, 00:21:15.023 "copy": true, 00:21:15.023 "nvme_iov_md": false 00:21:15.023 }, 00:21:15.023 "memory_domains": [ 00:21:15.023 { 00:21:15.023 "dma_device_id": "system", 00:21:15.023 "dma_device_type": 1 00:21:15.023 }, 00:21:15.023 { 00:21:15.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.023 "dma_device_type": 2 00:21:15.023 } 00:21:15.023 ], 00:21:15.023 "driver_specific": {} 00:21:15.023 } 00:21:15.023 ] 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:15.023 "name": "Existed_Raid", 00:21:15.023 "uuid": "20ca9052-a921-4bea-82eb-ed51ebf5119e", 00:21:15.023 "strip_size_kb": 0, 00:21:15.023 "state": "configuring", 00:21:15.023 "raid_level": "raid1", 00:21:15.023 "superblock": true, 00:21:15.023 "num_base_bdevs": 3, 00:21:15.023 "num_base_bdevs_discovered": 2, 00:21:15.023 "num_base_bdevs_operational": 3, 00:21:15.023 "base_bdevs_list": [ 00:21:15.023 { 00:21:15.023 "name": "BaseBdev1", 00:21:15.023 "uuid": "2cc38032-cbad-460e-93ef-ebcc6080fea6", 00:21:15.023 "is_configured": true, 00:21:15.023 "data_offset": 2048, 00:21:15.023 "data_size": 63488 00:21:15.023 }, 00:21:15.023 { 00:21:15.023 "name": "BaseBdev2", 00:21:15.023 "uuid": "0d4a4d43-c951-4ba9-9430-c697f3505dbf", 00:21:15.023 "is_configured": true, 00:21:15.023 "data_offset": 2048, 00:21:15.023 "data_size": 
63488 00:21:15.023 }, 00:21:15.023 { 00:21:15.023 "name": "BaseBdev3", 00:21:15.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.023 "is_configured": false, 00:21:15.023 "data_offset": 0, 00:21:15.023 "data_size": 0 00:21:15.023 } 00:21:15.023 ] 00:21:15.023 }' 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:15.023 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.333 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:15.333 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.333 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.333 [2024-12-09 23:02:50.582327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:15.333 BaseBdev3 00:21:15.333 [2024-12-09 23:02:50.582957] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:15.333 [2024-12-09 23:02:50.582996] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:15.333 [2024-12-09 23:02:50.583519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:15.333 [2024-12-09 23:02:50.583699] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:15.333 [2024-12-09 23:02:50.583710] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:15.333 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.333 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:21:15.333 [2024-12-09 23:02:50.583871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:15.333 
23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:15.333 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:15.333 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:15.333 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:15.333 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:15.333 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:15.333 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.333 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.333 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.333 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:15.333 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.333 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.333 [ 00:21:15.333 { 00:21:15.333 "name": "BaseBdev3", 00:21:15.333 "aliases": [ 00:21:15.333 "aca085b5-d260-4a04-be0e-8630266819ee" 00:21:15.333 ], 00:21:15.333 "product_name": "Malloc disk", 00:21:15.333 "block_size": 512, 00:21:15.333 "num_blocks": 65536, 00:21:15.333 "uuid": "aca085b5-d260-4a04-be0e-8630266819ee", 00:21:15.334 "assigned_rate_limits": { 00:21:15.334 "rw_ios_per_sec": 0, 00:21:15.334 "rw_mbytes_per_sec": 0, 00:21:15.334 "r_mbytes_per_sec": 0, 00:21:15.334 "w_mbytes_per_sec": 0 00:21:15.334 }, 00:21:15.334 "claimed": true, 00:21:15.334 "claim_type": "exclusive_write", 00:21:15.334 "zoned": 
false, 00:21:15.334 "supported_io_types": { 00:21:15.334 "read": true, 00:21:15.334 "write": true, 00:21:15.334 "unmap": true, 00:21:15.334 "flush": true, 00:21:15.334 "reset": true, 00:21:15.334 "nvme_admin": false, 00:21:15.334 "nvme_io": false, 00:21:15.334 "nvme_io_md": false, 00:21:15.334 "write_zeroes": true, 00:21:15.334 "zcopy": true, 00:21:15.334 "get_zone_info": false, 00:21:15.334 "zone_management": false, 00:21:15.334 "zone_append": false, 00:21:15.334 "compare": false, 00:21:15.334 "compare_and_write": false, 00:21:15.334 "abort": true, 00:21:15.334 "seek_hole": false, 00:21:15.334 "seek_data": false, 00:21:15.334 "copy": true, 00:21:15.334 "nvme_iov_md": false 00:21:15.334 }, 00:21:15.334 "memory_domains": [ 00:21:15.334 { 00:21:15.334 "dma_device_id": "system", 00:21:15.334 "dma_device_type": 1 00:21:15.334 }, 00:21:15.334 { 00:21:15.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.334 "dma_device_type": 2 00:21:15.334 } 00:21:15.334 ], 00:21:15.334 "driver_specific": {} 00:21:15.334 } 00:21:15.334 ] 00:21:15.334 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.334 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:15.334 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:15.334 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:15.334 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:21:15.334 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:15.334 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:15.334 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:15.334 23:02:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:15.334 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:15.334 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:15.334 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:15.334 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:15.334 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:15.334 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:15.334 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.334 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:15.334 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.334 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.334 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:15.334 "name": "Existed_Raid", 00:21:15.334 "uuid": "20ca9052-a921-4bea-82eb-ed51ebf5119e", 00:21:15.334 "strip_size_kb": 0, 00:21:15.334 "state": "online", 00:21:15.334 "raid_level": "raid1", 00:21:15.334 "superblock": true, 00:21:15.334 "num_base_bdevs": 3, 00:21:15.334 "num_base_bdevs_discovered": 3, 00:21:15.334 "num_base_bdevs_operational": 3, 00:21:15.334 "base_bdevs_list": [ 00:21:15.334 { 00:21:15.334 "name": "BaseBdev1", 00:21:15.334 "uuid": "2cc38032-cbad-460e-93ef-ebcc6080fea6", 00:21:15.334 "is_configured": true, 00:21:15.334 "data_offset": 2048, 00:21:15.334 "data_size": 63488 00:21:15.334 }, 00:21:15.334 { 00:21:15.334 
"name": "BaseBdev2", 00:21:15.334 "uuid": "0d4a4d43-c951-4ba9-9430-c697f3505dbf", 00:21:15.334 "is_configured": true, 00:21:15.334 "data_offset": 2048, 00:21:15.334 "data_size": 63488 00:21:15.334 }, 00:21:15.334 { 00:21:15.334 "name": "BaseBdev3", 00:21:15.334 "uuid": "aca085b5-d260-4a04-be0e-8630266819ee", 00:21:15.334 "is_configured": true, 00:21:15.334 "data_offset": 2048, 00:21:15.334 "data_size": 63488 00:21:15.334 } 00:21:15.334 ] 00:21:15.334 }' 00:21:15.334 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:15.334 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.595 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:15.595 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:15.595 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:15.595 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:15.595 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:21:15.595 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:15.595 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:15.595 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:15.856 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.856 23:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.856 [2024-12-09 23:02:50.958869] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:15.856 23:02:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.856 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:15.856 "name": "Existed_Raid", 00:21:15.856 "aliases": [ 00:21:15.856 "20ca9052-a921-4bea-82eb-ed51ebf5119e" 00:21:15.856 ], 00:21:15.856 "product_name": "Raid Volume", 00:21:15.856 "block_size": 512, 00:21:15.856 "num_blocks": 63488, 00:21:15.856 "uuid": "20ca9052-a921-4bea-82eb-ed51ebf5119e", 00:21:15.856 "assigned_rate_limits": { 00:21:15.856 "rw_ios_per_sec": 0, 00:21:15.856 "rw_mbytes_per_sec": 0, 00:21:15.856 "r_mbytes_per_sec": 0, 00:21:15.856 "w_mbytes_per_sec": 0 00:21:15.856 }, 00:21:15.856 "claimed": false, 00:21:15.856 "zoned": false, 00:21:15.856 "supported_io_types": { 00:21:15.856 "read": true, 00:21:15.856 "write": true, 00:21:15.856 "unmap": false, 00:21:15.856 "flush": false, 00:21:15.856 "reset": true, 00:21:15.856 "nvme_admin": false, 00:21:15.856 "nvme_io": false, 00:21:15.856 "nvme_io_md": false, 00:21:15.856 "write_zeroes": true, 00:21:15.856 "zcopy": false, 00:21:15.856 "get_zone_info": false, 00:21:15.856 "zone_management": false, 00:21:15.856 "zone_append": false, 00:21:15.856 "compare": false, 00:21:15.856 "compare_and_write": false, 00:21:15.856 "abort": false, 00:21:15.856 "seek_hole": false, 00:21:15.856 "seek_data": false, 00:21:15.856 "copy": false, 00:21:15.856 "nvme_iov_md": false 00:21:15.856 }, 00:21:15.856 "memory_domains": [ 00:21:15.856 { 00:21:15.856 "dma_device_id": "system", 00:21:15.856 "dma_device_type": 1 00:21:15.856 }, 00:21:15.856 { 00:21:15.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.856 "dma_device_type": 2 00:21:15.856 }, 00:21:15.856 { 00:21:15.856 "dma_device_id": "system", 00:21:15.856 "dma_device_type": 1 00:21:15.856 }, 00:21:15.856 { 00:21:15.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.856 "dma_device_type": 2 00:21:15.856 }, 00:21:15.856 { 00:21:15.856 "dma_device_id": "system", 00:21:15.856 "dma_device_type": 1 00:21:15.856 }, 
00:21:15.856 { 00:21:15.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.856 "dma_device_type": 2 00:21:15.856 } 00:21:15.856 ], 00:21:15.856 "driver_specific": { 00:21:15.856 "raid": { 00:21:15.856 "uuid": "20ca9052-a921-4bea-82eb-ed51ebf5119e", 00:21:15.856 "strip_size_kb": 0, 00:21:15.856 "state": "online", 00:21:15.856 "raid_level": "raid1", 00:21:15.856 "superblock": true, 00:21:15.856 "num_base_bdevs": 3, 00:21:15.856 "num_base_bdevs_discovered": 3, 00:21:15.856 "num_base_bdevs_operational": 3, 00:21:15.856 "base_bdevs_list": [ 00:21:15.856 { 00:21:15.856 "name": "BaseBdev1", 00:21:15.856 "uuid": "2cc38032-cbad-460e-93ef-ebcc6080fea6", 00:21:15.856 "is_configured": true, 00:21:15.856 "data_offset": 2048, 00:21:15.856 "data_size": 63488 00:21:15.856 }, 00:21:15.856 { 00:21:15.856 "name": "BaseBdev2", 00:21:15.856 "uuid": "0d4a4d43-c951-4ba9-9430-c697f3505dbf", 00:21:15.856 "is_configured": true, 00:21:15.856 "data_offset": 2048, 00:21:15.856 "data_size": 63488 00:21:15.856 }, 00:21:15.856 { 00:21:15.856 "name": "BaseBdev3", 00:21:15.856 "uuid": "aca085b5-d260-4a04-be0e-8630266819ee", 00:21:15.856 "is_configured": true, 00:21:15.856 "data_offset": 2048, 00:21:15.856 "data_size": 63488 00:21:15.856 } 00:21:15.856 ] 00:21:15.856 } 00:21:15.856 } 00:21:15.856 }' 00:21:15.856 23:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:15.857 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:15.857 BaseBdev2 00:21:15.857 BaseBdev3' 00:21:15.857 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:15.857 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:15.857 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for 
name in $base_bdev_names 00:21:15.857 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:15.857 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:15.857 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.857 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.857 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.857 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:15.857 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:15.857 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:15.857 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:15.857 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.857 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.857 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:15.857 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.857 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:15.857 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:15.857 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:15.857 23:02:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:15.857 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.857 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.857 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:15.857 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.857 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:15.857 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:15.857 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:15.857 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.857 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.857 [2024-12-09 23:02:51.162599] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:16.119 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.119 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:16.119 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:21:16.119 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:16.119 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:21:16.119 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:16.119 23:02:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:16.119 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:16.119 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:16.119 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:16.119 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:16.119 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:16.119 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:16.119 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:16.119 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:16.119 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:16.119 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.119 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:16.119 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.119 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:16.119 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.119 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:16.119 "name": "Existed_Raid", 00:21:16.119 "uuid": "20ca9052-a921-4bea-82eb-ed51ebf5119e", 00:21:16.119 "strip_size_kb": 0, 00:21:16.119 "state": "online", 00:21:16.119 "raid_level": 
"raid1", 00:21:16.119 "superblock": true, 00:21:16.119 "num_base_bdevs": 3, 00:21:16.119 "num_base_bdevs_discovered": 2, 00:21:16.119 "num_base_bdevs_operational": 2, 00:21:16.119 "base_bdevs_list": [ 00:21:16.119 { 00:21:16.119 "name": null, 00:21:16.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.119 "is_configured": false, 00:21:16.119 "data_offset": 0, 00:21:16.119 "data_size": 63488 00:21:16.119 }, 00:21:16.119 { 00:21:16.119 "name": "BaseBdev2", 00:21:16.119 "uuid": "0d4a4d43-c951-4ba9-9430-c697f3505dbf", 00:21:16.119 "is_configured": true, 00:21:16.119 "data_offset": 2048, 00:21:16.119 "data_size": 63488 00:21:16.119 }, 00:21:16.119 { 00:21:16.119 "name": "BaseBdev3", 00:21:16.119 "uuid": "aca085b5-d260-4a04-be0e-8630266819ee", 00:21:16.119 "is_configured": true, 00:21:16.119 "data_offset": 2048, 00:21:16.119 "data_size": 63488 00:21:16.119 } 00:21:16.119 ] 00:21:16.119 }' 00:21:16.119 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:16.119 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:16.380 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:16.380 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:16.380 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.380 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.380 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:16.380 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:16.380 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.380 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:21:16.380 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:16.380 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:16.380 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.380 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:16.380 [2024-12-09 23:02:51.650872] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:16.380 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.380 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:16.380 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:16.380 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:16.380 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.380 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.380 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:16.380 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.641 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.642 23:02:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:16.642 [2024-12-09 23:02:51.761731] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:16.642 [2024-12-09 23:02:51.762044] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:16.642 [2024-12-09 23:02:51.829139] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:16.642 [2024-12-09 23:02:51.829214] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:16.642 [2024-12-09 23:02:51.829227] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:21:16.642 23:02:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:16.642 BaseBdev2 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:16.642 23:02:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:16.642 [ 00:21:16.642 { 00:21:16.642 "name": "BaseBdev2", 00:21:16.642 "aliases": [ 00:21:16.642 "f69efb47-53f2-4b81-a8fe-b922ee4b98a2" 00:21:16.642 ], 00:21:16.642 "product_name": "Malloc disk", 00:21:16.642 "block_size": 512, 00:21:16.642 "num_blocks": 65536, 00:21:16.642 "uuid": "f69efb47-53f2-4b81-a8fe-b922ee4b98a2", 00:21:16.642 "assigned_rate_limits": { 00:21:16.642 "rw_ios_per_sec": 0, 00:21:16.642 "rw_mbytes_per_sec": 0, 00:21:16.642 "r_mbytes_per_sec": 0, 00:21:16.642 "w_mbytes_per_sec": 0 00:21:16.642 }, 00:21:16.642 "claimed": false, 00:21:16.642 "zoned": false, 00:21:16.642 "supported_io_types": { 00:21:16.642 "read": true, 00:21:16.642 "write": true, 00:21:16.642 "unmap": true, 00:21:16.642 "flush": true, 00:21:16.642 "reset": true, 00:21:16.642 "nvme_admin": false, 00:21:16.642 "nvme_io": false, 00:21:16.642 "nvme_io_md": false, 00:21:16.642 "write_zeroes": true, 00:21:16.642 "zcopy": true, 00:21:16.642 "get_zone_info": false, 00:21:16.642 "zone_management": false, 00:21:16.642 "zone_append": false, 00:21:16.642 "compare": false, 00:21:16.642 "compare_and_write": false, 00:21:16.642 "abort": true, 00:21:16.642 "seek_hole": false, 00:21:16.642 "seek_data": false, 00:21:16.642 "copy": true, 00:21:16.642 "nvme_iov_md": false 00:21:16.642 }, 00:21:16.642 "memory_domains": [ 00:21:16.642 { 00:21:16.642 "dma_device_id": "system", 00:21:16.642 "dma_device_type": 1 00:21:16.642 }, 00:21:16.642 { 00:21:16.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:16.642 "dma_device_type": 2 00:21:16.642 } 00:21:16.642 ], 00:21:16.642 "driver_specific": {} 00:21:16.642 } 00:21:16.642 ] 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:16.642 BaseBdev3 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:16.642 [ 00:21:16.642 { 00:21:16.642 "name": "BaseBdev3", 00:21:16.642 "aliases": [ 00:21:16.642 "8017b863-c01d-4495-a0e1-ce1a96baca1b" 00:21:16.642 ], 00:21:16.642 "product_name": "Malloc disk", 00:21:16.642 "block_size": 512, 00:21:16.642 "num_blocks": 65536, 00:21:16.642 "uuid": "8017b863-c01d-4495-a0e1-ce1a96baca1b", 00:21:16.642 "assigned_rate_limits": { 00:21:16.642 "rw_ios_per_sec": 0, 00:21:16.642 "rw_mbytes_per_sec": 0, 00:21:16.642 "r_mbytes_per_sec": 0, 00:21:16.642 "w_mbytes_per_sec": 0 00:21:16.642 }, 00:21:16.642 "claimed": false, 00:21:16.642 "zoned": false, 00:21:16.642 "supported_io_types": { 00:21:16.642 "read": true, 00:21:16.642 "write": true, 00:21:16.642 "unmap": true, 00:21:16.642 "flush": true, 00:21:16.642 "reset": true, 00:21:16.642 "nvme_admin": false, 00:21:16.642 "nvme_io": false, 00:21:16.642 "nvme_io_md": false, 00:21:16.642 "write_zeroes": true, 00:21:16.642 "zcopy": true, 00:21:16.642 "get_zone_info": false, 00:21:16.642 "zone_management": false, 00:21:16.642 "zone_append": false, 00:21:16.642 "compare": false, 00:21:16.642 "compare_and_write": false, 00:21:16.642 "abort": true, 00:21:16.642 "seek_hole": false, 00:21:16.642 "seek_data": false, 00:21:16.642 "copy": true, 00:21:16.642 "nvme_iov_md": false 00:21:16.642 }, 00:21:16.642 "memory_domains": [ 00:21:16.642 { 00:21:16.642 "dma_device_id": "system", 00:21:16.642 "dma_device_type": 1 00:21:16.642 }, 00:21:16.642 { 00:21:16.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:16.642 "dma_device_type": 2 00:21:16.642 } 00:21:16.642 ], 00:21:16.642 "driver_specific": {} 00:21:16.642 } 00:21:16.642 ] 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.642 
23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.642 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:16.642 [2024-12-09 23:02:51.985802] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:16.642 [2024-12-09 23:02:51.985880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:16.642 [2024-12-09 23:02:51.985906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:16.642 [2024-12-09 23:02:51.988176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:16.643 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.643 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:16.643 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:16.643 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:16.643 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:16.643 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:16.643 23:02:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:16.643 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:16.643 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:16.643 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:16.643 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:16.643 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.643 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.643 23:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:16.643 23:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:16.903 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.903 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:16.903 "name": "Existed_Raid", 00:21:16.903 "uuid": "3ca4e6cb-00c2-4264-9180-e562b59ed73f", 00:21:16.903 "strip_size_kb": 0, 00:21:16.903 "state": "configuring", 00:21:16.903 "raid_level": "raid1", 00:21:16.903 "superblock": true, 00:21:16.903 "num_base_bdevs": 3, 00:21:16.903 "num_base_bdevs_discovered": 2, 00:21:16.903 "num_base_bdevs_operational": 3, 00:21:16.903 "base_bdevs_list": [ 00:21:16.903 { 00:21:16.903 "name": "BaseBdev1", 00:21:16.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.903 "is_configured": false, 00:21:16.903 "data_offset": 0, 00:21:16.903 "data_size": 0 00:21:16.903 }, 00:21:16.903 { 00:21:16.903 "name": "BaseBdev2", 00:21:16.903 "uuid": "f69efb47-53f2-4b81-a8fe-b922ee4b98a2", 00:21:16.903 "is_configured": 
true, 00:21:16.903 "data_offset": 2048, 00:21:16.903 "data_size": 63488 00:21:16.903 }, 00:21:16.903 { 00:21:16.903 "name": "BaseBdev3", 00:21:16.903 "uuid": "8017b863-c01d-4495-a0e1-ce1a96baca1b", 00:21:16.903 "is_configured": true, 00:21:16.903 "data_offset": 2048, 00:21:16.903 "data_size": 63488 00:21:16.903 } 00:21:16.903 ] 00:21:16.903 }' 00:21:16.903 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:16.903 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.164 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:17.164 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.164 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.164 [2024-12-09 23:02:52.353928] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:17.164 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.164 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:17.164 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:17.164 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:17.164 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:17.164 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:17.164 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:17.164 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:17.164 23:02:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:17.164 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:17.164 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:17.164 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.164 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.164 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.164 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:17.164 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.164 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:17.164 "name": "Existed_Raid", 00:21:17.164 "uuid": "3ca4e6cb-00c2-4264-9180-e562b59ed73f", 00:21:17.164 "strip_size_kb": 0, 00:21:17.164 "state": "configuring", 00:21:17.164 "raid_level": "raid1", 00:21:17.164 "superblock": true, 00:21:17.164 "num_base_bdevs": 3, 00:21:17.164 "num_base_bdevs_discovered": 1, 00:21:17.164 "num_base_bdevs_operational": 3, 00:21:17.164 "base_bdevs_list": [ 00:21:17.164 { 00:21:17.164 "name": "BaseBdev1", 00:21:17.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.164 "is_configured": false, 00:21:17.164 "data_offset": 0, 00:21:17.164 "data_size": 0 00:21:17.164 }, 00:21:17.164 { 00:21:17.164 "name": null, 00:21:17.164 "uuid": "f69efb47-53f2-4b81-a8fe-b922ee4b98a2", 00:21:17.164 "is_configured": false, 00:21:17.164 "data_offset": 0, 00:21:17.164 "data_size": 63488 00:21:17.164 }, 00:21:17.164 { 00:21:17.164 "name": "BaseBdev3", 00:21:17.164 "uuid": "8017b863-c01d-4495-a0e1-ce1a96baca1b", 00:21:17.164 "is_configured": true, 
00:21:17.164 "data_offset": 2048, 00:21:17.164 "data_size": 63488 00:21:17.164 } 00:21:17.164 ] 00:21:17.164 }' 00:21:17.164 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:17.164 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.425 [2024-12-09 23:02:52.743192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:17.425 BaseBdev1 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:17.425 
23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.425 [ 00:21:17.425 { 00:21:17.425 "name": "BaseBdev1", 00:21:17.425 "aliases": [ 00:21:17.425 "d7d757e7-df54-4a4f-a21b-449849d7f871" 00:21:17.425 ], 00:21:17.425 "product_name": "Malloc disk", 00:21:17.425 "block_size": 512, 00:21:17.425 "num_blocks": 65536, 00:21:17.425 "uuid": "d7d757e7-df54-4a4f-a21b-449849d7f871", 00:21:17.425 "assigned_rate_limits": { 00:21:17.425 "rw_ios_per_sec": 0, 00:21:17.425 "rw_mbytes_per_sec": 0, 00:21:17.425 "r_mbytes_per_sec": 0, 00:21:17.425 "w_mbytes_per_sec": 0 00:21:17.425 }, 00:21:17.425 "claimed": true, 00:21:17.425 "claim_type": "exclusive_write", 00:21:17.425 "zoned": false, 00:21:17.425 "supported_io_types": { 00:21:17.425 "read": true, 00:21:17.425 "write": true, 00:21:17.425 "unmap": true, 00:21:17.425 "flush": true, 00:21:17.425 "reset": true, 00:21:17.425 "nvme_admin": false, 00:21:17.425 "nvme_io": 
false, 00:21:17.425 "nvme_io_md": false, 00:21:17.425 "write_zeroes": true, 00:21:17.425 "zcopy": true, 00:21:17.425 "get_zone_info": false, 00:21:17.425 "zone_management": false, 00:21:17.425 "zone_append": false, 00:21:17.425 "compare": false, 00:21:17.425 "compare_and_write": false, 00:21:17.425 "abort": true, 00:21:17.425 "seek_hole": false, 00:21:17.425 "seek_data": false, 00:21:17.425 "copy": true, 00:21:17.425 "nvme_iov_md": false 00:21:17.425 }, 00:21:17.425 "memory_domains": [ 00:21:17.425 { 00:21:17.425 "dma_device_id": "system", 00:21:17.425 "dma_device_type": 1 00:21:17.425 }, 00:21:17.425 { 00:21:17.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:17.425 "dma_device_type": 2 00:21:17.425 } 00:21:17.425 ], 00:21:17.425 "driver_specific": {} 00:21:17.425 } 00:21:17.425 ] 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:17.425 23:02:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.425 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:17.686 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.686 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:17.686 "name": "Existed_Raid", 00:21:17.686 "uuid": "3ca4e6cb-00c2-4264-9180-e562b59ed73f", 00:21:17.686 "strip_size_kb": 0, 00:21:17.686 "state": "configuring", 00:21:17.686 "raid_level": "raid1", 00:21:17.686 "superblock": true, 00:21:17.686 "num_base_bdevs": 3, 00:21:17.686 "num_base_bdevs_discovered": 2, 00:21:17.686 "num_base_bdevs_operational": 3, 00:21:17.686 "base_bdevs_list": [ 00:21:17.686 { 00:21:17.686 "name": "BaseBdev1", 00:21:17.686 "uuid": "d7d757e7-df54-4a4f-a21b-449849d7f871", 00:21:17.686 "is_configured": true, 00:21:17.686 "data_offset": 2048, 00:21:17.686 "data_size": 63488 00:21:17.686 }, 00:21:17.686 { 00:21:17.686 "name": null, 00:21:17.686 "uuid": "f69efb47-53f2-4b81-a8fe-b922ee4b98a2", 00:21:17.686 "is_configured": false, 00:21:17.686 "data_offset": 0, 00:21:17.686 "data_size": 63488 00:21:17.686 }, 00:21:17.686 { 00:21:17.686 "name": "BaseBdev3", 00:21:17.686 "uuid": "8017b863-c01d-4495-a0e1-ce1a96baca1b", 00:21:17.686 "is_configured": true, 00:21:17.686 "data_offset": 2048, 00:21:17.686 "data_size": 63488 00:21:17.686 } 00:21:17.686 ] 00:21:17.686 }' 
00:21:17.686 23:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:17.686 23:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.948 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.948 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:17.948 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.948 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.948 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.948 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:21:17.948 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:21:17.948 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.948 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.948 [2024-12-09 23:02:53.143450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:17.948 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.948 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:17.948 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:17.948 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:17.948 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:17.948 
23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:17.948 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:17.948 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:17.948 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:17.948 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:17.948 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:17.948 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.948 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.948 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.948 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:17.948 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.948 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:17.948 "name": "Existed_Raid", 00:21:17.948 "uuid": "3ca4e6cb-00c2-4264-9180-e562b59ed73f", 00:21:17.948 "strip_size_kb": 0, 00:21:17.948 "state": "configuring", 00:21:17.948 "raid_level": "raid1", 00:21:17.948 "superblock": true, 00:21:17.948 "num_base_bdevs": 3, 00:21:17.948 "num_base_bdevs_discovered": 1, 00:21:17.948 "num_base_bdevs_operational": 3, 00:21:17.948 "base_bdevs_list": [ 00:21:17.948 { 00:21:17.948 "name": "BaseBdev1", 00:21:17.948 "uuid": "d7d757e7-df54-4a4f-a21b-449849d7f871", 00:21:17.948 "is_configured": true, 00:21:17.948 "data_offset": 2048, 00:21:17.948 "data_size": 63488 00:21:17.948 }, 00:21:17.948 { 
00:21:17.948 "name": null, 00:21:17.948 "uuid": "f69efb47-53f2-4b81-a8fe-b922ee4b98a2", 00:21:17.948 "is_configured": false, 00:21:17.948 "data_offset": 0, 00:21:17.948 "data_size": 63488 00:21:17.948 }, 00:21:17.948 { 00:21:17.948 "name": null, 00:21:17.948 "uuid": "8017b863-c01d-4495-a0e1-ce1a96baca1b", 00:21:17.948 "is_configured": false, 00:21:17.948 "data_offset": 0, 00:21:17.948 "data_size": 63488 00:21:17.948 } 00:21:17.948 ] 00:21:17.948 }' 00:21:17.948 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:17.948 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:18.210 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.210 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.210 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:18.210 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:18.210 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.210 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:21:18.210 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:18.210 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.210 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:18.210 [2024-12-09 23:02:53.539603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:18.210 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.210 23:02:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:18.210 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:18.210 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:18.210 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:18.210 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:18.210 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:18.210 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:18.210 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:18.210 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:18.210 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:18.210 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.210 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:18.210 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.210 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:18.210 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.470 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:18.470 "name": "Existed_Raid", 00:21:18.470 "uuid": "3ca4e6cb-00c2-4264-9180-e562b59ed73f", 00:21:18.470 "strip_size_kb": 0, 
00:21:18.470 "state": "configuring", 00:21:18.470 "raid_level": "raid1", 00:21:18.470 "superblock": true, 00:21:18.470 "num_base_bdevs": 3, 00:21:18.470 "num_base_bdevs_discovered": 2, 00:21:18.470 "num_base_bdevs_operational": 3, 00:21:18.470 "base_bdevs_list": [ 00:21:18.470 { 00:21:18.470 "name": "BaseBdev1", 00:21:18.470 "uuid": "d7d757e7-df54-4a4f-a21b-449849d7f871", 00:21:18.470 "is_configured": true, 00:21:18.470 "data_offset": 2048, 00:21:18.470 "data_size": 63488 00:21:18.470 }, 00:21:18.470 { 00:21:18.471 "name": null, 00:21:18.471 "uuid": "f69efb47-53f2-4b81-a8fe-b922ee4b98a2", 00:21:18.471 "is_configured": false, 00:21:18.471 "data_offset": 0, 00:21:18.471 "data_size": 63488 00:21:18.471 }, 00:21:18.471 { 00:21:18.471 "name": "BaseBdev3", 00:21:18.471 "uuid": "8017b863-c01d-4495-a0e1-ce1a96baca1b", 00:21:18.471 "is_configured": true, 00:21:18.471 "data_offset": 2048, 00:21:18.471 "data_size": 63488 00:21:18.471 } 00:21:18.471 ] 00:21:18.471 }' 00:21:18.471 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:18.471 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:18.735 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.735 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.735 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:18.735 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:18.735 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.735 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:21:18.735 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete 
BaseBdev1 00:21:18.735 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.735 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:18.735 [2024-12-09 23:02:53.891664] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:18.735 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.735 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:18.735 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:18.735 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:18.735 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:18.735 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:18.735 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:18.735 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:18.735 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:18.735 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:18.735 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:18.735 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.735 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:18.735 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:18.735 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:18.735 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.735 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:18.735 "name": "Existed_Raid", 00:21:18.735 "uuid": "3ca4e6cb-00c2-4264-9180-e562b59ed73f", 00:21:18.735 "strip_size_kb": 0, 00:21:18.735 "state": "configuring", 00:21:18.735 "raid_level": "raid1", 00:21:18.735 "superblock": true, 00:21:18.735 "num_base_bdevs": 3, 00:21:18.735 "num_base_bdevs_discovered": 1, 00:21:18.735 "num_base_bdevs_operational": 3, 00:21:18.735 "base_bdevs_list": [ 00:21:18.735 { 00:21:18.735 "name": null, 00:21:18.735 "uuid": "d7d757e7-df54-4a4f-a21b-449849d7f871", 00:21:18.735 "is_configured": false, 00:21:18.735 "data_offset": 0, 00:21:18.735 "data_size": 63488 00:21:18.735 }, 00:21:18.735 { 00:21:18.735 "name": null, 00:21:18.735 "uuid": "f69efb47-53f2-4b81-a8fe-b922ee4b98a2", 00:21:18.735 "is_configured": false, 00:21:18.735 "data_offset": 0, 00:21:18.735 "data_size": 63488 00:21:18.735 }, 00:21:18.735 { 00:21:18.735 "name": "BaseBdev3", 00:21:18.735 "uuid": "8017b863-c01d-4495-a0e1-ce1a96baca1b", 00:21:18.735 "is_configured": true, 00:21:18.736 "data_offset": 2048, 00:21:18.736 "data_size": 63488 00:21:18.736 } 00:21:18.736 ] 00:21:18.736 }' 00:21:18.736 23:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:18.736 23:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:18.997 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.997 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.997 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:18.997 23:02:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:18.997 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.997 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:21:18.997 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:18.997 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.997 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:18.997 [2024-12-09 23:02:54.313468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:18.997 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.997 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:18.997 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:18.997 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:18.997 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:18.997 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:18.997 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:18.997 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:18.997 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:18.997 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:21:18.997 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:18.997 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.997 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:18.997 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.997 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:18.997 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.997 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:18.997 "name": "Existed_Raid", 00:21:18.997 "uuid": "3ca4e6cb-00c2-4264-9180-e562b59ed73f", 00:21:18.997 "strip_size_kb": 0, 00:21:18.997 "state": "configuring", 00:21:18.997 "raid_level": "raid1", 00:21:18.997 "superblock": true, 00:21:18.997 "num_base_bdevs": 3, 00:21:18.997 "num_base_bdevs_discovered": 2, 00:21:18.997 "num_base_bdevs_operational": 3, 00:21:18.997 "base_bdevs_list": [ 00:21:18.997 { 00:21:18.997 "name": null, 00:21:18.997 "uuid": "d7d757e7-df54-4a4f-a21b-449849d7f871", 00:21:18.997 "is_configured": false, 00:21:18.997 "data_offset": 0, 00:21:18.997 "data_size": 63488 00:21:18.997 }, 00:21:18.997 { 00:21:18.997 "name": "BaseBdev2", 00:21:18.997 "uuid": "f69efb47-53f2-4b81-a8fe-b922ee4b98a2", 00:21:18.997 "is_configured": true, 00:21:18.997 "data_offset": 2048, 00:21:18.997 "data_size": 63488 00:21:18.997 }, 00:21:18.997 { 00:21:18.997 "name": "BaseBdev3", 00:21:18.997 "uuid": "8017b863-c01d-4495-a0e1-ce1a96baca1b", 00:21:18.997 "is_configured": true, 00:21:18.997 "data_offset": 2048, 00:21:18.997 "data_size": 63488 00:21:18.997 } 00:21:18.997 ] 00:21:18.997 }' 00:21:18.997 23:02:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:18.997 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:19.596 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.596 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.596 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:19.596 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:19.596 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.596 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:21:19.596 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.596 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.596 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:19.596 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:19.596 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.596 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d7d757e7-df54-4a4f-a21b-449849d7f871 00:21:19.596 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.596 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:19.596 [2024-12-09 23:02:54.741889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:19.596 [2024-12-09 23:02:54.742205] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:19.596 [2024-12-09 23:02:54.742224] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:19.596 [2024-12-09 23:02:54.742534] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:21:19.596 NewBaseBdev 00:21:19.596 [2024-12-09 23:02:54.742686] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:19.596 [2024-12-09 23:02:54.742698] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:21:19.596 [2024-12-09 23:02:54.742841] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:19.596 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.596 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:21:19.597 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:21:19.597 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:19.597 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:19.597 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:19.597 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:19.597 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:19.597 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.597 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:19.597 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]]
00:21:19.597 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:21:19.597 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.597 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:19.597 [
00:21:19.597 {
00:21:19.597 "name": "NewBaseBdev",
00:21:19.597 "aliases": [
00:21:19.597 "d7d757e7-df54-4a4f-a21b-449849d7f871"
00:21:19.597 ],
00:21:19.597 "product_name": "Malloc disk",
00:21:19.597 "block_size": 512,
00:21:19.597 "num_blocks": 65536,
00:21:19.597 "uuid": "d7d757e7-df54-4a4f-a21b-449849d7f871",
00:21:19.597 "assigned_rate_limits": {
00:21:19.597 "rw_ios_per_sec": 0,
00:21:19.597 "rw_mbytes_per_sec": 0,
00:21:19.597 "r_mbytes_per_sec": 0,
00:21:19.597 "w_mbytes_per_sec": 0
00:21:19.597 },
00:21:19.597 "claimed": true,
00:21:19.597 "claim_type": "exclusive_write",
00:21:19.597 "zoned": false,
00:21:19.597 "supported_io_types": {
00:21:19.597 "read": true,
00:21:19.597 "write": true,
00:21:19.597 "unmap": true,
00:21:19.597 "flush": true,
00:21:19.597 "reset": true,
00:21:19.597 "nvme_admin": false,
00:21:19.597 "nvme_io": false,
00:21:19.597 "nvme_io_md": false,
00:21:19.597 "write_zeroes": true,
00:21:19.597 "zcopy": true,
00:21:19.597 "get_zone_info": false,
00:21:19.597 "zone_management": false,
00:21:19.597 "zone_append": false,
00:21:19.597 "compare": false,
00:21:19.597 "compare_and_write": false,
00:21:19.597 "abort": true,
00:21:19.597 "seek_hole": false,
00:21:19.597 "seek_data": false,
00:21:19.597 "copy": true,
00:21:19.597 "nvme_iov_md": false
00:21:19.597 },
00:21:19.597 "memory_domains": [
00:21:19.597 {
00:21:19.597 "dma_device_id": "system",
00:21:19.597 "dma_device_type": 1
00:21:19.597 },
00:21:19.597 {
00:21:19.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:19.597 "dma_device_type": 2
00:21:19.597 }
00:21:19.597 ],
00:21:19.597 "driver_specific": {}
00:21:19.597 }
00:21:19.597 ]
00:21:19.597 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.597 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:21:19.597 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:21:19.597 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:21:19.597 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:21:19.597 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:21:19.597 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:21:19.597 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:21:19.597 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:19.597 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:19.597 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:19.597 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:19.597 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:19.597 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:21:19.597 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.597 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:19.597 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.597 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:19.597 "name": "Existed_Raid",
00:21:19.597 "uuid": "3ca4e6cb-00c2-4264-9180-e562b59ed73f",
00:21:19.597 "strip_size_kb": 0,
00:21:19.597 "state": "online",
00:21:19.597 "raid_level": "raid1",
00:21:19.597 "superblock": true,
00:21:19.597 "num_base_bdevs": 3,
00:21:19.597 "num_base_bdevs_discovered": 3,
00:21:19.597 "num_base_bdevs_operational": 3,
00:21:19.597 "base_bdevs_list": [
00:21:19.597 {
00:21:19.597 "name": "NewBaseBdev",
00:21:19.597 "uuid": "d7d757e7-df54-4a4f-a21b-449849d7f871",
00:21:19.597 "is_configured": true,
00:21:19.597 "data_offset": 2048,
00:21:19.597 "data_size": 63488
00:21:19.597 },
00:21:19.597 {
00:21:19.597 "name": "BaseBdev2",
00:21:19.597 "uuid": "f69efb47-53f2-4b81-a8fe-b922ee4b98a2",
00:21:19.597 "is_configured": true,
00:21:19.597 "data_offset": 2048,
00:21:19.597 "data_size": 63488
00:21:19.597 },
00:21:19.597 {
00:21:19.597 "name": "BaseBdev3",
00:21:19.597 "uuid": "8017b863-c01d-4495-a0e1-ce1a96baca1b",
00:21:19.597 "is_configured": true,
00:21:19.597 "data_offset": 2048,
00:21:19.597 "data_size": 63488
00:21:19.597 }
00:21:19.597 ]
00:21:19.597 }'
00:21:19.597 23:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:19.597 23:02:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:19.858 23:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:21:19.858 23:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:21:19.858 23:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:21:19.858 23:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:21:19.858 23:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:21:19.858 23:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:21:19.858 23:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:21:19.858 23:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:21:19.858 23:02:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.858 23:02:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:19.858 [2024-12-09 23:02:55.094415] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:21:19.858 23:02:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.858 23:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:21:19.858 "name": "Existed_Raid",
00:21:19.858 "aliases": [
00:21:19.858 "3ca4e6cb-00c2-4264-9180-e562b59ed73f"
00:21:19.858 ],
00:21:19.858 "product_name": "Raid Volume",
00:21:19.858 "block_size": 512,
00:21:19.858 "num_blocks": 63488,
00:21:19.858 "uuid": "3ca4e6cb-00c2-4264-9180-e562b59ed73f",
00:21:19.858 "assigned_rate_limits": {
00:21:19.858 "rw_ios_per_sec": 0,
00:21:19.858 "rw_mbytes_per_sec": 0,
00:21:19.858 "r_mbytes_per_sec": 0,
00:21:19.858 "w_mbytes_per_sec": 0
00:21:19.858 },
00:21:19.858 "claimed": false,
00:21:19.858 "zoned": false,
00:21:19.858 "supported_io_types": {
00:21:19.859 "read": true,
00:21:19.859 "write": true,
00:21:19.859 "unmap": false,
00:21:19.859 "flush": false,
00:21:19.859 "reset": true,
00:21:19.859 "nvme_admin": false,
00:21:19.859 "nvme_io": false,
00:21:19.859 "nvme_io_md": false,
00:21:19.859 "write_zeroes": true,
00:21:19.859 "zcopy": false,
00:21:19.859 "get_zone_info": false,
00:21:19.859 "zone_management": false,
00:21:19.859 "zone_append": false,
00:21:19.859 "compare": false,
00:21:19.859 "compare_and_write": false,
00:21:19.859 "abort": false,
00:21:19.859 "seek_hole": false,
00:21:19.859 "seek_data": false,
00:21:19.859 "copy": false,
00:21:19.859 "nvme_iov_md": false
00:21:19.859 },
00:21:19.859 "memory_domains": [
00:21:19.859 {
00:21:19.859 "dma_device_id": "system",
00:21:19.859 "dma_device_type": 1
00:21:19.859 },
00:21:19.859 {
00:21:19.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:19.859 "dma_device_type": 2
00:21:19.859 },
00:21:19.859 {
00:21:19.859 "dma_device_id": "system",
00:21:19.859 "dma_device_type": 1
00:21:19.859 },
00:21:19.859 {
00:21:19.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:19.859 "dma_device_type": 2
00:21:19.859 },
00:21:19.859 {
00:21:19.859 "dma_device_id": "system",
00:21:19.859 "dma_device_type": 1
00:21:19.859 },
00:21:19.859 {
00:21:19.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:19.859 "dma_device_type": 2
00:21:19.859 }
00:21:19.859 ],
00:21:19.859 "driver_specific": {
00:21:19.859 "raid": {
00:21:19.859 "uuid": "3ca4e6cb-00c2-4264-9180-e562b59ed73f",
00:21:19.859 "strip_size_kb": 0,
00:21:19.859 "state": "online",
00:21:19.859 "raid_level": "raid1",
00:21:19.859 "superblock": true,
00:21:19.859 "num_base_bdevs": 3,
00:21:19.859 "num_base_bdevs_discovered": 3,
00:21:19.859 "num_base_bdevs_operational": 3,
00:21:19.859 "base_bdevs_list": [
00:21:19.859 {
00:21:19.859 "name": "NewBaseBdev",
00:21:19.859 "uuid": "d7d757e7-df54-4a4f-a21b-449849d7f871",
00:21:19.859 "is_configured": true,
00:21:19.859 "data_offset": 2048,
00:21:19.859 "data_size": 63488
00:21:19.859 },
00:21:19.859 {
00:21:19.859 "name": "BaseBdev2",
00:21:19.859 "uuid": "f69efb47-53f2-4b81-a8fe-b922ee4b98a2",
00:21:19.859 "is_configured": true,
00:21:19.859 "data_offset": 2048,
00:21:19.859 "data_size": 63488
00:21:19.859 },
00:21:19.859 {
00:21:19.859 "name": "BaseBdev3",
00:21:19.859 "uuid": "8017b863-c01d-4495-a0e1-ce1a96baca1b",
00:21:19.859 "is_configured": true,
00:21:19.859 "data_offset": 2048,
00:21:19.859 "data_size": 63488
00:21:19.859 }
00:21:19.859 ]
00:21:19.859 }
00:21:19.859 }
00:21:19.859 }'
00:21:19.859 23:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:21:19.859 23:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:21:19.859 BaseBdev2
00:21:19.859 BaseBdev3'
00:21:19.859 23:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:21:19.859 23:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:21:19.859 23:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:21:19.859 23:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:21:19.859 23:02:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.859 23:02:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:19.859 23:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:21:19.859 23:02:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:20.159 23:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:21:20.159 23:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:21:20.159 23:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:21:20.159 23:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:21:20.159 23:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:21:20.159 23:02:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:20.159 23:02:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:20.159 23:02:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:20.159 23:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:21:20.159 23:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:21:20.159 23:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:21:20.159 23:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:21:20.159 23:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:21:20.159 23:02:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:20.159 23:02:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:20.159 23:02:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:20.159 23:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:21:20.159 23:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:21:20.159 23:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:21:20.159 23:02:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:20.159 23:02:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:20.159 [2024-12-09 23:02:55.294114] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:21:20.159 [2024-12-09 23:02:55.294157] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:21:20.159 [2024-12-09 23:02:55.294245] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:21:20.159 [2024-12-09 23:02:55.294568] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:21:20.159 [2024-12-09 23:02:55.294578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline
00:21:20.159 23:02:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:20.159 23:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66355
00:21:20.159 23:02:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66355 ']'
00:21:20.159 23:02:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66355
00:21:20.159 23:02:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname
00:21:20.159 23:02:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:20.159 23:02:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66355
killing process with pid 66355
00:21:20.159 23:02:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:20.159 23:02:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:20.159 23:02:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66355'
00:21:20.159 23:02:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66355
00:21:20.159 23:02:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66355
[2024-12-09 23:02:55.329056] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:21:20.421 [2024-12-09 23:02:55.544123] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:21:21.363 23:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:21:21.363
00:21:21.363 real 0m8.260s
00:21:21.363 user 0m12.789s
00:21:21.363 sys 0m1.559s
00:21:21.363 23:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:21.363 ************************************
00:21:21.363 END TEST raid_state_function_test_sb
00:21:21.363 ************************************
00:21:21.363 23:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:21.363 23:02:56 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3
00:21:21.363 23:02:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:21:21.363 23:02:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:21.363 23:02:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:21:21.363 ************************************
00:21:21.363 START TEST raid_superblock_test
00:21:21.363 ************************************
00:21:21.363 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3
00:21:21.363 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1
00:21:21.363 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3
00:21:21.363 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:21:21.363 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:21:21.363 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:21:21.363 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:21:21.363 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:21:21.363 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:21:21.363 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:21:21.363 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:21:21.363 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:21:21.363 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:21:21.363 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:21:21.363 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:21:21.363 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:21:21.363 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66959
00:21:21.363 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66959
00:21:21.363 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66959 ']'
00:21:21.363 23:02:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:21:21.363 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:21.363 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:21.363 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:21.363 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:21.363 23:02:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:21.363 [2024-12-09 23:02:56.516529] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization...
00:21:21.363 [2024-12-09 23:02:56.516977] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66959 ]
00:21:21.363 [2024-12-09 23:02:56.677580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:21.623 [2024-12-09 23:02:56.819603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:21.883 [2024-12-09 23:02:56.986730] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:21:21.883 [2024-12-09 23:02:56.987035] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:21:22.143 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:22.143 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:21:22.143 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:21:22.143 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:21:22.143 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:21:22.143 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:21:22.143 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:21:22.143 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:21:22.143 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:21:22.143 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:21:22.143 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:21:22.143 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:22.143 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:22.143 malloc1
00:21:22.143 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:22.143 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:21:22.143 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:22.143 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:22.143 [2024-12-09 23:02:57.496838] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:21:22.143 [2024-12-09 23:02:57.496919] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:22.143 [2024-12-09 23:02:57.496944] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:21:22.143 [2024-12-09 23:02:57.496955] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:22.143 [2024-12-09 23:02:57.499538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:22.143 [2024-12-09 23:02:57.499590] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:21:22.405 pt1
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:22.405 malloc2
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:22.405 [2024-12-09 23:02:57.546411] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:21:22.405 [2024-12-09 23:02:57.546484] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:22.405 [2024-12-09 23:02:57.546515] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:21:22.405 [2024-12-09 23:02:57.546526] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:22.405 [2024-12-09 23:02:57.549052] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:22.405 [2024-12-09 23:02:57.549275] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:21:22.405 pt2
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:22.405 malloc3
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:22.405 [2024-12-09 23:02:57.604603] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:21:22.405 [2024-12-09 23:02:57.604866] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:22.405 [2024-12-09 23:02:57.604906] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:21:22.405 [2024-12-09 23:02:57.604918] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:22.405 [2024-12-09 23:02:57.607819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:22.405 [2024-12-09 23:02:57.607874] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:21:22.405 pt3
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:22.405 [2024-12-09 23:02:57.616897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:21:22.405 [2024-12-09 23:02:57.619024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:21:22.405 [2024-12-09 23:02:57.619142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:21:22.405 [2024-12-09 23:02:57.619338] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:21:22.405 [2024-12-09 23:02:57.619360] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:21:22.405 [2024-12-09 23:02:57.619651] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:21:22.405 [2024-12-09 23:02:57.619893] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:21:22.405 [2024-12-09 23:02:57.619910] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:21:22.405 [2024-12-09 23:02:57.620076] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:21:22.405 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:21:22.406 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:21:22.406 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:21:22.406 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:22.406 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:22.406 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:22.406 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:22.406 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:22.406 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:22.406 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:22.406 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:22.406 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:22.406 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:22.406 "name": "raid_bdev1",
00:21:22.406 "uuid": "144920b3-6302-4a03-b696-f97f01fdb9cd",
00:21:22.406 "strip_size_kb": 0,
00:21:22.406 "state": "online",
00:21:22.406 "raid_level": "raid1",
00:21:22.406 "superblock": true,
00:21:22.406 "num_base_bdevs": 3,
00:21:22.406 "num_base_bdevs_discovered": 3,
00:21:22.406 "num_base_bdevs_operational": 3,
00:21:22.406 "base_bdevs_list": [
00:21:22.406 {
00:21:22.406 "name": "pt1",
00:21:22.406 "uuid": "00000000-0000-0000-0000-000000000001",
00:21:22.406 "is_configured": true,
00:21:22.406 "data_offset": 2048,
00:21:22.406 "data_size": 63488
00:21:22.406 },
00:21:22.406 {
00:21:22.406 "name": "pt2",
00:21:22.406 "uuid": "00000000-0000-0000-0000-000000000002",
00:21:22.406 "is_configured": true,
00:21:22.406 "data_offset": 2048,
00:21:22.406 "data_size": 63488
00:21:22.406 },
00:21:22.406 {
00:21:22.406 "name": "pt3",
00:21:22.406 "uuid": "00000000-0000-0000-0000-000000000003",
00:21:22.406 "is_configured": true,
00:21:22.406 "data_offset": 2048,
00:21:22.406 "data_size": 63488
00:21:22.406 }
00:21:22.406 ]
00:21:22.406 }'
00:21:22.406 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:22.406 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:22.664 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:21:22.664 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:21:22.664 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:21:22.664 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:21:22.664 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:21:22.664 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:21:22.664 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:21:22.664 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:22.664 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:21:22.664 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:22.664 [2024-12-09 23:02:57.969346] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:21:22.664 23:02:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:22.664 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:21:22.664 "name": "raid_bdev1",
00:21:22.664 "aliases": [
00:21:22.664 "144920b3-6302-4a03-b696-f97f01fdb9cd"
00:21:22.664 ],
00:21:22.664 "product_name": "Raid Volume",
00:21:22.664 "block_size": 512,
00:21:22.664 "num_blocks": 63488,
00:21:22.664 "uuid": "144920b3-6302-4a03-b696-f97f01fdb9cd",
00:21:22.664 "assigned_rate_limits": {
00:21:22.664 "rw_ios_per_sec": 0,
00:21:22.664 "rw_mbytes_per_sec": 0,
00:21:22.664 "r_mbytes_per_sec": 0,
00:21:22.664 "w_mbytes_per_sec": 0
00:21:22.664 },
00:21:22.664 "claimed": false,
00:21:22.664 "zoned": false,
00:21:22.664 "supported_io_types": {
00:21:22.664 "read": true,
00:21:22.664 "write": true,
00:21:22.664 "unmap": false,
00:21:22.664 "flush": false,
00:21:22.664 "reset": true,
00:21:22.664 "nvme_admin": false,
00:21:22.664 "nvme_io": false,
00:21:22.664 "nvme_io_md": false,
00:21:22.664 "write_zeroes": true,
00:21:22.664 "zcopy": false,
00:21:22.664 "get_zone_info": false,
00:21:22.664 "zone_management": false,
00:21:22.664 "zone_append": false,
00:21:22.664 "compare": false,
00:21:22.664 "compare_and_write": false,
00:21:22.664 "abort": false,
00:21:22.664 "seek_hole": false,
00:21:22.664 "seek_data": false,
00:21:22.664 "copy": false,
00:21:22.664 "nvme_iov_md": false
00:21:22.664 },
00:21:22.664 "memory_domains": [
00:21:22.665 {
00:21:22.665 "dma_device_id": "system",
00:21:22.665 "dma_device_type": 1
00:21:22.665 },
00:21:22.665 {
00:21:22.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:22.665 "dma_device_type": 2
00:21:22.665 },
00:21:22.665 {
00:21:22.665 "dma_device_id": "system",
00:21:22.665 "dma_device_type": 1
00:21:22.665 },
00:21:22.665 {
00:21:22.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:22.665 "dma_device_type": 2
00:21:22.665 },
00:21:22.665 {
00:21:22.665 "dma_device_id": "system",
00:21:22.665 "dma_device_type": 1
00:21:22.665 },
00:21:22.665 {
00:21:22.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:22.665 "dma_device_type": 2
00:21:22.665 }
00:21:22.665 ],
00:21:22.665 "driver_specific": {
00:21:22.665 "raid": {
00:21:22.665 "uuid": "144920b3-6302-4a03-b696-f97f01fdb9cd",
00:21:22.665 "strip_size_kb": 0,
00:21:22.665 "state": "online",
00:21:22.665 "raid_level": "raid1",
00:21:22.665 "superblock": true,
00:21:22.665 "num_base_bdevs": 3,
00:21:22.665 "num_base_bdevs_discovered": 3,
00:21:22.665 "num_base_bdevs_operational": 3,
00:21:22.665 "base_bdevs_list": [
00:21:22.665 {
00:21:22.665 "name": "pt1",
00:21:22.665 "uuid": "00000000-0000-0000-0000-000000000001",
00:21:22.665 "is_configured": true,
00:21:22.665 "data_offset": 2048,
00:21:22.665 "data_size": 63488
00:21:22.665 },
00:21:22.665 {
00:21:22.665 "name": "pt2",
00:21:22.665 "uuid": "00000000-0000-0000-0000-000000000002",
00:21:22.665 "is_configured": true,
00:21:22.665 "data_offset": 2048,
00:21:22.665 "data_size": 63488
00:21:22.665 },
00:21:22.665 {
00:21:22.665 "name": "pt3",
00:21:22.665 "uuid": "00000000-0000-0000-0000-000000000003",
00:21:22.665 "is_configured": true,
00:21:22.665 "data_offset": 2048,
00:21:22.665 "data_size": 63488
00:21:22.665 }
00:21:22.665 ]
00:21:22.665 }
00:21:22.665 }
00:21:22.665 }'
00:21:22.665 23:02:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:21:22.926 pt2
00:21:22.926 pt3'
00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:21:22.926 23:02:58 bdev_raid.raid_superblock_test --
bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:22.926 [2024-12-09 23:02:58.177315] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=144920b3-6302-4a03-b696-f97f01fdb9cd 00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 144920b3-6302-4a03-b696-f97f01fdb9cd ']' 00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.926 [2024-12-09 23:02:58.208985] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:22.926 [2024-12-09 23:02:58.209028] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:22.926 [2024-12-09 23:02:58.209136] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:22.926 [2024-12-09 23:02:58.209229] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:22.926 [2024-12-09 23:02:58.209242] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.926 23:02:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 
00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.926 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:23.188 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.188 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:23.188 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:21:23.188 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:21:23.188 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:21:23.188 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:23.188 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:23.188 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:23.188 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:23.188 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd 
bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:21:23.188 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.188 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.188 [2024-12-09 23:02:58.321047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:23.188 [2024-12-09 23:02:58.323210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:23.188 [2024-12-09 23:02:58.323286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:23.188 [2024-12-09 23:02:58.323345] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:23.188 [2024-12-09 23:02:58.323398] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:23.188 [2024-12-09 23:02:58.323421] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:21:23.188 [2024-12-09 23:02:58.323440] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:23.188 [2024-12-09 23:02:58.323450] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:23.188 request: 00:21:23.188 { 00:21:23.188 "name": "raid_bdev1", 00:21:23.188 "raid_level": "raid1", 00:21:23.188 "base_bdevs": [ 00:21:23.188 "malloc1", 00:21:23.188 "malloc2", 00:21:23.188 "malloc3" 00:21:23.188 ], 00:21:23.188 "superblock": false, 00:21:23.188 "method": "bdev_raid_create", 00:21:23.188 "req_id": 1 00:21:23.188 } 00:21:23.188 Got JSON-RPC error response 00:21:23.188 response: 00:21:23.188 { 00:21:23.188 "code": -17, 00:21:23.188 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:23.188 } 00:21:23.188 23:02:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:23.188 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:21:23.188 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:23.188 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:23.188 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:23.188 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.188 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:23.188 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.188 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.188 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.188 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:23.188 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:23.188 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:23.189 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.189 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.189 [2024-12-09 23:02:58.368997] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:23.189 [2024-12-09 23:02:58.369059] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:23.189 [2024-12-09 23:02:58.369078] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:23.189 [2024-12-09 23:02:58.369088] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:23.189 [2024-12-09 23:02:58.371662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:23.189 [2024-12-09 23:02:58.371706] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:23.189 [2024-12-09 23:02:58.371800] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:23.189 [2024-12-09 23:02:58.371856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:23.189 pt1 00:21:23.189 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.189 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:21:23.189 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:23.189 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:23.189 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:23.189 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:23.189 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:23.189 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:23.189 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:23.189 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:23.189 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:23.189 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.189 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:21:23.189 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.189 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.189 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.189 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:23.189 "name": "raid_bdev1", 00:21:23.189 "uuid": "144920b3-6302-4a03-b696-f97f01fdb9cd", 00:21:23.189 "strip_size_kb": 0, 00:21:23.189 "state": "configuring", 00:21:23.189 "raid_level": "raid1", 00:21:23.189 "superblock": true, 00:21:23.189 "num_base_bdevs": 3, 00:21:23.189 "num_base_bdevs_discovered": 1, 00:21:23.189 "num_base_bdevs_operational": 3, 00:21:23.189 "base_bdevs_list": [ 00:21:23.189 { 00:21:23.189 "name": "pt1", 00:21:23.189 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:23.189 "is_configured": true, 00:21:23.189 "data_offset": 2048, 00:21:23.189 "data_size": 63488 00:21:23.189 }, 00:21:23.189 { 00:21:23.189 "name": null, 00:21:23.189 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:23.189 "is_configured": false, 00:21:23.189 "data_offset": 2048, 00:21:23.189 "data_size": 63488 00:21:23.189 }, 00:21:23.189 { 00:21:23.189 "name": null, 00:21:23.189 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:23.189 "is_configured": false, 00:21:23.189 "data_offset": 2048, 00:21:23.189 "data_size": 63488 00:21:23.189 } 00:21:23.189 ] 00:21:23.189 }' 00:21:23.189 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:23.189 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.452 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:21:23.452 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:21:23.452 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.452 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.452 [2024-12-09 23:02:58.705136] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:23.452 [2024-12-09 23:02:58.705208] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:23.452 [2024-12-09 23:02:58.705233] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:21:23.452 [2024-12-09 23:02:58.705243] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:23.452 [2024-12-09 23:02:58.705735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:23.452 [2024-12-09 23:02:58.705750] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:23.452 [2024-12-09 23:02:58.705847] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:23.452 [2024-12-09 23:02:58.705870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:23.452 pt2 00:21:23.452 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.452 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:21:23.452 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.452 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.452 [2024-12-09 23:02:58.713143] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:23.452 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.452 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:21:23.452 23:02:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:23.452 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:23.453 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:23.453 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:23.453 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:23.453 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:23.453 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:23.453 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:23.453 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:23.453 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.453 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.453 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.453 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.453 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.453 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:23.453 "name": "raid_bdev1", 00:21:23.453 "uuid": "144920b3-6302-4a03-b696-f97f01fdb9cd", 00:21:23.453 "strip_size_kb": 0, 00:21:23.453 "state": "configuring", 00:21:23.453 "raid_level": "raid1", 00:21:23.453 "superblock": true, 00:21:23.453 "num_base_bdevs": 3, 00:21:23.453 "num_base_bdevs_discovered": 1, 00:21:23.453 "num_base_bdevs_operational": 3, 00:21:23.453 "base_bdevs_list": [ 
00:21:23.453 { 00:21:23.453 "name": "pt1", 00:21:23.453 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:23.453 "is_configured": true, 00:21:23.453 "data_offset": 2048, 00:21:23.453 "data_size": 63488 00:21:23.453 }, 00:21:23.453 { 00:21:23.453 "name": null, 00:21:23.453 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:23.453 "is_configured": false, 00:21:23.453 "data_offset": 0, 00:21:23.453 "data_size": 63488 00:21:23.453 }, 00:21:23.453 { 00:21:23.453 "name": null, 00:21:23.453 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:23.453 "is_configured": false, 00:21:23.453 "data_offset": 2048, 00:21:23.453 "data_size": 63488 00:21:23.453 } 00:21:23.453 ] 00:21:23.453 }' 00:21:23.453 23:02:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:23.453 23:02:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.718 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:23.718 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:23.718 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:23.718 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.718 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.718 [2024-12-09 23:02:59.041204] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:23.718 [2024-12-09 23:02:59.041279] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:23.718 [2024-12-09 23:02:59.041301] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:23.718 [2024-12-09 23:02:59.041313] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:23.718 [2024-12-09 23:02:59.041839] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:23.718 [2024-12-09 23:02:59.041873] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:23.718 [2024-12-09 23:02:59.041964] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:23.718 [2024-12-09 23:02:59.041998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:23.718 pt2 00:21:23.718 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.718 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:23.718 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:23.718 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:23.718 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.718 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.718 [2024-12-09 23:02:59.049187] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:23.718 [2024-12-09 23:02:59.049238] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:23.718 [2024-12-09 23:02:59.049252] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:23.718 [2024-12-09 23:02:59.049264] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:23.718 [2024-12-09 23:02:59.049681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:23.718 [2024-12-09 23:02:59.049710] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:23.718 [2024-12-09 23:02:59.049777] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 
00:21:23.718 [2024-12-09 23:02:59.049798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:23.718 [2024-12-09 23:02:59.049926] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:23.718 [2024-12-09 23:02:59.049940] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:23.718 [2024-12-09 23:02:59.050217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:23.718 [2024-12-09 23:02:59.050409] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:23.718 [2024-12-09 23:02:59.050419] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:23.718 [2024-12-09 23:02:59.050565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:23.718 pt3 00:21:23.718 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.718 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:23.718 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:23.718 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:23.718 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:23.718 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:23.718 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:23.718 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:23.718 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:23.718 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:21:23.718 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:23.718 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:23.718 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:23.718 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.718 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.718 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.718 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.718 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.979 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:23.979 "name": "raid_bdev1", 00:21:23.979 "uuid": "144920b3-6302-4a03-b696-f97f01fdb9cd", 00:21:23.979 "strip_size_kb": 0, 00:21:23.979 "state": "online", 00:21:23.979 "raid_level": "raid1", 00:21:23.979 "superblock": true, 00:21:23.979 "num_base_bdevs": 3, 00:21:23.979 "num_base_bdevs_discovered": 3, 00:21:23.979 "num_base_bdevs_operational": 3, 00:21:23.979 "base_bdevs_list": [ 00:21:23.979 { 00:21:23.979 "name": "pt1", 00:21:23.979 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:23.979 "is_configured": true, 00:21:23.979 "data_offset": 2048, 00:21:23.979 "data_size": 63488 00:21:23.979 }, 00:21:23.979 { 00:21:23.979 "name": "pt2", 00:21:23.979 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:23.979 "is_configured": true, 00:21:23.979 "data_offset": 2048, 00:21:23.979 "data_size": 63488 00:21:23.979 }, 00:21:23.979 { 00:21:23.979 "name": "pt3", 00:21:23.979 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:23.979 "is_configured": true, 00:21:23.979 "data_offset": 2048, 00:21:23.979 
"data_size": 63488 00:21:23.979 } 00:21:23.979 ] 00:21:23.979 }' 00:21:23.980 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:23.980 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.240 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:24.240 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:24.240 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:24.240 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:24.240 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:24.240 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:24.240 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:24.240 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:24.240 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.240 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.240 [2024-12-09 23:02:59.381672] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:24.240 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.240 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:24.240 "name": "raid_bdev1", 00:21:24.240 "aliases": [ 00:21:24.240 "144920b3-6302-4a03-b696-f97f01fdb9cd" 00:21:24.240 ], 00:21:24.240 "product_name": "Raid Volume", 00:21:24.240 "block_size": 512, 00:21:24.240 "num_blocks": 63488, 00:21:24.240 "uuid": "144920b3-6302-4a03-b696-f97f01fdb9cd", 00:21:24.240 "assigned_rate_limits": { 
00:21:24.240 "rw_ios_per_sec": 0, 00:21:24.240 "rw_mbytes_per_sec": 0, 00:21:24.240 "r_mbytes_per_sec": 0, 00:21:24.240 "w_mbytes_per_sec": 0 00:21:24.240 }, 00:21:24.240 "claimed": false, 00:21:24.240 "zoned": false, 00:21:24.240 "supported_io_types": { 00:21:24.240 "read": true, 00:21:24.240 "write": true, 00:21:24.240 "unmap": false, 00:21:24.240 "flush": false, 00:21:24.241 "reset": true, 00:21:24.241 "nvme_admin": false, 00:21:24.241 "nvme_io": false, 00:21:24.241 "nvme_io_md": false, 00:21:24.241 "write_zeroes": true, 00:21:24.241 "zcopy": false, 00:21:24.241 "get_zone_info": false, 00:21:24.241 "zone_management": false, 00:21:24.241 "zone_append": false, 00:21:24.241 "compare": false, 00:21:24.241 "compare_and_write": false, 00:21:24.241 "abort": false, 00:21:24.241 "seek_hole": false, 00:21:24.241 "seek_data": false, 00:21:24.241 "copy": false, 00:21:24.241 "nvme_iov_md": false 00:21:24.241 }, 00:21:24.241 "memory_domains": [ 00:21:24.241 { 00:21:24.241 "dma_device_id": "system", 00:21:24.241 "dma_device_type": 1 00:21:24.241 }, 00:21:24.241 { 00:21:24.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:24.241 "dma_device_type": 2 00:21:24.241 }, 00:21:24.241 { 00:21:24.241 "dma_device_id": "system", 00:21:24.241 "dma_device_type": 1 00:21:24.241 }, 00:21:24.241 { 00:21:24.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:24.241 "dma_device_type": 2 00:21:24.241 }, 00:21:24.241 { 00:21:24.241 "dma_device_id": "system", 00:21:24.241 "dma_device_type": 1 00:21:24.241 }, 00:21:24.241 { 00:21:24.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:24.241 "dma_device_type": 2 00:21:24.241 } 00:21:24.241 ], 00:21:24.241 "driver_specific": { 00:21:24.241 "raid": { 00:21:24.241 "uuid": "144920b3-6302-4a03-b696-f97f01fdb9cd", 00:21:24.241 "strip_size_kb": 0, 00:21:24.241 "state": "online", 00:21:24.241 "raid_level": "raid1", 00:21:24.241 "superblock": true, 00:21:24.241 "num_base_bdevs": 3, 00:21:24.241 "num_base_bdevs_discovered": 3, 00:21:24.241 
"num_base_bdevs_operational": 3, 00:21:24.241 "base_bdevs_list": [ 00:21:24.241 { 00:21:24.241 "name": "pt1", 00:21:24.241 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:24.241 "is_configured": true, 00:21:24.241 "data_offset": 2048, 00:21:24.241 "data_size": 63488 00:21:24.241 }, 00:21:24.241 { 00:21:24.241 "name": "pt2", 00:21:24.241 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:24.241 "is_configured": true, 00:21:24.241 "data_offset": 2048, 00:21:24.241 "data_size": 63488 00:21:24.241 }, 00:21:24.241 { 00:21:24.241 "name": "pt3", 00:21:24.241 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:24.241 "is_configured": true, 00:21:24.241 "data_offset": 2048, 00:21:24.241 "data_size": 63488 00:21:24.241 } 00:21:24.241 ] 00:21:24.241 } 00:21:24.241 } 00:21:24.241 }' 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:24.241 pt2 00:21:24.241 pt3' 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:24.241 23:02:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.241 [2024-12-09 23:02:59.573696] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 144920b3-6302-4a03-b696-f97f01fdb9cd '!=' 144920b3-6302-4a03-b696-f97f01fdb9cd ']' 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:21:24.241 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:21:24.242 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.242 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.242 [2024-12-09 23:02:59.597423] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:24.501 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.501 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:24.501 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:24.501 23:02:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:24.501 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:24.501 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:24.501 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:24.501 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:24.501 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:24.501 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:24.501 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:24.501 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.501 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.501 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.501 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.501 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.501 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:24.501 "name": "raid_bdev1", 00:21:24.501 "uuid": "144920b3-6302-4a03-b696-f97f01fdb9cd", 00:21:24.501 "strip_size_kb": 0, 00:21:24.502 "state": "online", 00:21:24.502 "raid_level": "raid1", 00:21:24.502 "superblock": true, 00:21:24.502 "num_base_bdevs": 3, 00:21:24.502 "num_base_bdevs_discovered": 2, 00:21:24.502 "num_base_bdevs_operational": 2, 00:21:24.502 "base_bdevs_list": [ 00:21:24.502 { 00:21:24.502 "name": null, 00:21:24.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.502 
"is_configured": false, 00:21:24.502 "data_offset": 0, 00:21:24.502 "data_size": 63488 00:21:24.502 }, 00:21:24.502 { 00:21:24.502 "name": "pt2", 00:21:24.502 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:24.502 "is_configured": true, 00:21:24.502 "data_offset": 2048, 00:21:24.502 "data_size": 63488 00:21:24.502 }, 00:21:24.502 { 00:21:24.502 "name": "pt3", 00:21:24.502 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:24.502 "is_configured": true, 00:21:24.502 "data_offset": 2048, 00:21:24.502 "data_size": 63488 00:21:24.502 } 00:21:24.502 ] 00:21:24.502 }' 00:21:24.502 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:24.502 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.763 [2024-12-09 23:02:59.925479] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:24.763 [2024-12-09 23:02:59.925520] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:24.763 [2024-12-09 23:02:59.925610] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:24.763 [2024-12-09 23:02:59.925679] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:24.763 [2024-12-09 23:02:59.925694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:24.763 23:02:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.763 [2024-12-09 23:02:59.989439] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:24.763 [2024-12-09 23:02:59.989503] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:24.763 [2024-12-09 23:02:59.989520] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:21:24.763 [2024-12-09 23:02:59.989570] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:24.763 [2024-12-09 23:02:59.992158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:24.763 [2024-12-09 23:02:59.992202] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:24.763 [2024-12-09 23:02:59.992293] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:24.763 [2024-12-09 23:02:59.992347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:24.763 pt2 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:24.763 23:02:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.763 23:02:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.763 23:03:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.763 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:24.763 "name": "raid_bdev1", 00:21:24.763 "uuid": "144920b3-6302-4a03-b696-f97f01fdb9cd", 00:21:24.763 "strip_size_kb": 0, 00:21:24.763 "state": "configuring", 00:21:24.763 "raid_level": "raid1", 00:21:24.763 "superblock": true, 00:21:24.763 "num_base_bdevs": 3, 00:21:24.763 "num_base_bdevs_discovered": 1, 00:21:24.763 "num_base_bdevs_operational": 2, 00:21:24.763 "base_bdevs_list": [ 00:21:24.763 { 00:21:24.763 "name": null, 00:21:24.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.764 
"is_configured": false, 00:21:24.764 "data_offset": 2048, 00:21:24.764 "data_size": 63488 00:21:24.764 }, 00:21:24.764 { 00:21:24.764 "name": "pt2", 00:21:24.764 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:24.764 "is_configured": true, 00:21:24.764 "data_offset": 2048, 00:21:24.764 "data_size": 63488 00:21:24.764 }, 00:21:24.764 { 00:21:24.764 "name": null, 00:21:24.764 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:24.764 "is_configured": false, 00:21:24.764 "data_offset": 2048, 00:21:24.764 "data_size": 63488 00:21:24.764 } 00:21:24.764 ] 00:21:24.764 }' 00:21:24.764 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:24.764 23:03:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.077 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:21:25.077 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:25.077 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:21:25.077 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:25.077 23:03:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.077 23:03:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.077 [2024-12-09 23:03:00.325580] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:25.077 [2024-12-09 23:03:00.325664] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:25.077 [2024-12-09 23:03:00.325688] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:25.077 [2024-12-09 23:03:00.325702] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:25.077 [2024-12-09 23:03:00.326247] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:25.077 [2024-12-09 23:03:00.326269] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:25.077 [2024-12-09 23:03:00.326370] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:25.077 [2024-12-09 23:03:00.326399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:25.077 [2024-12-09 23:03:00.326522] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:25.077 [2024-12-09 23:03:00.326536] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:25.077 [2024-12-09 23:03:00.326833] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:25.077 [2024-12-09 23:03:00.327012] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:25.077 [2024-12-09 23:03:00.327031] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:25.077 [2024-12-09 23:03:00.327200] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:25.077 pt3 00:21:25.077 23:03:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.077 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:25.077 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:25.077 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:25.077 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:25.077 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:25.077 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:21:25.077 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:25.077 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:25.077 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:25.077 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:25.077 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.077 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.077 23:03:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.077 23:03:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.077 23:03:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.077 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:25.077 "name": "raid_bdev1", 00:21:25.077 "uuid": "144920b3-6302-4a03-b696-f97f01fdb9cd", 00:21:25.077 "strip_size_kb": 0, 00:21:25.077 "state": "online", 00:21:25.077 "raid_level": "raid1", 00:21:25.077 "superblock": true, 00:21:25.077 "num_base_bdevs": 3, 00:21:25.077 "num_base_bdevs_discovered": 2, 00:21:25.078 "num_base_bdevs_operational": 2, 00:21:25.078 "base_bdevs_list": [ 00:21:25.078 { 00:21:25.078 "name": null, 00:21:25.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.078 "is_configured": false, 00:21:25.078 "data_offset": 2048, 00:21:25.078 "data_size": 63488 00:21:25.078 }, 00:21:25.078 { 00:21:25.078 "name": "pt2", 00:21:25.078 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:25.078 "is_configured": true, 00:21:25.078 "data_offset": 2048, 00:21:25.078 "data_size": 63488 00:21:25.078 }, 00:21:25.078 { 00:21:25.078 "name": "pt3", 00:21:25.078 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:21:25.078 "is_configured": true, 00:21:25.078 "data_offset": 2048, 00:21:25.078 "data_size": 63488 00:21:25.078 } 00:21:25.078 ] 00:21:25.078 }' 00:21:25.078 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:25.078 23:03:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.396 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:25.396 23:03:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.396 23:03:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.396 [2024-12-09 23:03:00.653626] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:25.396 [2024-12-09 23:03:00.653670] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:25.396 [2024-12-09 23:03:00.653760] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:25.396 [2024-12-09 23:03:00.653835] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:25.396 [2024-12-09 23:03:00.653845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:25.396 23:03:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.396 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.396 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:25.396 23:03:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.396 23:03:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.396 23:03:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:25.396 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:25.396 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:21:25.396 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:21:25.396 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:21:25.396 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:21:25.396 23:03:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.396 23:03:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.396 23:03:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.396 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:25.396 23:03:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.396 23:03:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.396 [2024-12-09 23:03:00.705665] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:25.396 [2024-12-09 23:03:00.705733] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:25.396 [2024-12-09 23:03:00.705753] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:25.396 [2024-12-09 23:03:00.705763] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:25.396 [2024-12-09 23:03:00.708411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:25.396 [2024-12-09 23:03:00.708453] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:25.396 [2024-12-09 23:03:00.708546] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid 
superblock found on bdev pt1 00:21:25.396 [2024-12-09 23:03:00.708597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:25.396 [2024-12-09 23:03:00.708757] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:25.396 [2024-12-09 23:03:00.708769] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:25.396 [2024-12-09 23:03:00.708788] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:21:25.397 [2024-12-09 23:03:00.708843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:25.397 pt1 00:21:25.397 23:03:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.397 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:21:25.397 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:25.397 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:25.397 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:25.397 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:25.397 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:25.397 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:25.397 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:25.397 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:25.397 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:25.397 23:03:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:25.397 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.397 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.397 23:03:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.397 23:03:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.397 23:03:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.397 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:25.397 "name": "raid_bdev1", 00:21:25.397 "uuid": "144920b3-6302-4a03-b696-f97f01fdb9cd", 00:21:25.397 "strip_size_kb": 0, 00:21:25.397 "state": "configuring", 00:21:25.397 "raid_level": "raid1", 00:21:25.397 "superblock": true, 00:21:25.397 "num_base_bdevs": 3, 00:21:25.397 "num_base_bdevs_discovered": 1, 00:21:25.397 "num_base_bdevs_operational": 2, 00:21:25.397 "base_bdevs_list": [ 00:21:25.397 { 00:21:25.397 "name": null, 00:21:25.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.397 "is_configured": false, 00:21:25.397 "data_offset": 2048, 00:21:25.397 "data_size": 63488 00:21:25.397 }, 00:21:25.397 { 00:21:25.397 "name": "pt2", 00:21:25.397 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:25.397 "is_configured": true, 00:21:25.397 "data_offset": 2048, 00:21:25.397 "data_size": 63488 00:21:25.397 }, 00:21:25.397 { 00:21:25.397 "name": null, 00:21:25.397 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:25.397 "is_configured": false, 00:21:25.397 "data_offset": 2048, 00:21:25.397 "data_size": 63488 00:21:25.397 } 00:21:25.397 ] 00:21:25.397 }' 00:21:25.397 23:03:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:25.397 23:03:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:21:25.970 23:03:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:25.970 23:03:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:21:25.970 23:03:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.970 23:03:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.970 23:03:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.970 23:03:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:21:25.970 23:03:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:25.970 23:03:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.970 23:03:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.970 [2024-12-09 23:03:01.085778] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:25.970 [2024-12-09 23:03:01.085863] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:25.970 [2024-12-09 23:03:01.085886] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:21:25.970 [2024-12-09 23:03:01.085896] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:25.970 [2024-12-09 23:03:01.086436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:25.970 [2024-12-09 23:03:01.086455] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:25.970 [2024-12-09 23:03:01.086549] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:25.970 [2024-12-09 23:03:01.086573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev pt3 is claimed 00:21:25.971 [2024-12-09 23:03:01.086702] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:21:25.971 [2024-12-09 23:03:01.086711] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:25.971 [2024-12-09 23:03:01.086987] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:21:25.971 [2024-12-09 23:03:01.087177] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:25.971 [2024-12-09 23:03:01.087203] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:25.971 [2024-12-09 23:03:01.087349] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:25.971 pt3 00:21:25.971 23:03:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.971 23:03:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:25.971 23:03:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:25.971 23:03:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:25.971 23:03:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:25.971 23:03:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:25.971 23:03:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:25.971 23:03:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:25.971 23:03:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:25.971 23:03:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:25.971 23:03:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:21:25.971 23:03:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.971 23:03:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.971 23:03:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.971 23:03:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.971 23:03:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.971 23:03:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:25.971 "name": "raid_bdev1", 00:21:25.971 "uuid": "144920b3-6302-4a03-b696-f97f01fdb9cd", 00:21:25.971 "strip_size_kb": 0, 00:21:25.971 "state": "online", 00:21:25.971 "raid_level": "raid1", 00:21:25.971 "superblock": true, 00:21:25.971 "num_base_bdevs": 3, 00:21:25.971 "num_base_bdevs_discovered": 2, 00:21:25.971 "num_base_bdevs_operational": 2, 00:21:25.971 "base_bdevs_list": [ 00:21:25.971 { 00:21:25.971 "name": null, 00:21:25.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.971 "is_configured": false, 00:21:25.971 "data_offset": 2048, 00:21:25.971 "data_size": 63488 00:21:25.971 }, 00:21:25.971 { 00:21:25.971 "name": "pt2", 00:21:25.971 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:25.971 "is_configured": true, 00:21:25.971 "data_offset": 2048, 00:21:25.971 "data_size": 63488 00:21:25.971 }, 00:21:25.971 { 00:21:25.971 "name": "pt3", 00:21:25.971 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:25.971 "is_configured": true, 00:21:25.971 "data_offset": 2048, 00:21:25.971 "data_size": 63488 00:21:25.971 } 00:21:25.971 ] 00:21:25.971 }' 00:21:25.971 23:03:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:25.971 23:03:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.233 23:03:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:26.233 23:03:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:26.233 23:03:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.233 23:03:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.233 23:03:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.233 23:03:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:26.233 23:03:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:26.233 23:03:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:26.233 23:03:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.233 23:03:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.233 [2024-12-09 23:03:01.446176] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:26.233 23:03:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.233 23:03:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 144920b3-6302-4a03-b696-f97f01fdb9cd '!=' 144920b3-6302-4a03-b696-f97f01fdb9cd ']' 00:21:26.233 23:03:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66959 00:21:26.233 23:03:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66959 ']' 00:21:26.233 23:03:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66959 00:21:26.233 23:03:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:21:26.233 23:03:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:26.233 
23:03:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66959 00:21:26.233 killing process with pid 66959 00:21:26.233 23:03:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:26.233 23:03:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:26.233 23:03:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66959' 00:21:26.233 23:03:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66959 00:21:26.233 [2024-12-09 23:03:01.496169] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:26.233 23:03:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66959 00:21:26.233 [2024-12-09 23:03:01.496281] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:26.233 [2024-12-09 23:03:01.496356] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:26.233 [2024-12-09 23:03:01.496369] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:26.495 [2024-12-09 23:03:01.710493] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:27.438 23:03:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:21:27.438 00:21:27.438 real 0m6.072s 00:21:27.438 user 0m9.235s 00:21:27.438 sys 0m1.161s 00:21:27.438 23:03:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:27.438 ************************************ 00:21:27.438 END TEST raid_superblock_test 00:21:27.438 ************************************ 00:21:27.438 23:03:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.438 23:03:02 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 
read 00:21:27.438 23:03:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:27.438 23:03:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:27.438 23:03:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:27.438 ************************************ 00:21:27.438 START TEST raid_read_error_test 00:21:27.438 ************************************ 00:21:27.438 23:03:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:21:27.438 23:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:21:27.438 23:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:21:27.438 23:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:21:27.438 23:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:21:27.438 23:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:27.438 23:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:21:27.438 23:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:27.438 23:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:27.438 23:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:21:27.438 23:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:27.438 23:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:27.438 23:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:21:27.438 23:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:27.438 23:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:27.438 23:03:02 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:27.438 23:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:21:27.438 23:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:21:27.438 23:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:21:27.439 23:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:21:27.439 23:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:21:27.439 23:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:21:27.439 23:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:21:27.439 23:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:21:27.439 23:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:21:27.439 23:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.INVzkRGPAd 00:21:27.439 23:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67387 00:21:27.439 23:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67387 00:21:27.439 23:03:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67387 ']' 00:21:27.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:27.439 23:03:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.439 23:03:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:27.439 23:03:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:27.439 23:03:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.439 23:03:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:27.439 23:03:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.439 [2024-12-09 23:03:02.671962] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:21:27.439 [2024-12-09 23:03:02.672158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67387 ] 00:21:27.699 [2024-12-09 23:03:02.837120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.699 [2024-12-09 23:03:02.979305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.961 [2024-12-09 23:03:03.148512] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:27.961 [2024-12-09 23:03:03.148838] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:28.303 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:28.303 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:21:28.303 23:03:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:21:28.303 23:03:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:28.303 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.303 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.303 BaseBdev1_malloc 00:21:28.303 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.303 23:03:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:21:28.303 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.303 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.303 true 00:21:28.303 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.303 23:03:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:28.303 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.303 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.303 [2024-12-09 23:03:03.606633] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:28.303 [2024-12-09 23:03:03.606879] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:28.303 [2024-12-09 23:03:03.606913] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:28.303 [2024-12-09 23:03:03.606927] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:28.303 [2024-12-09 23:03:03.609509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:28.303 [2024-12-09 23:03:03.609569] vbdev_passthru.c: 711:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:21:28.303 BaseBdev1 00:21:28.303 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.304 23:03:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:28.304 23:03:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:28.304 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.304 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.564 BaseBdev2_malloc 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.564 true 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.564 [2024-12-09 23:03:03.656701] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:28.564 [2024-12-09 23:03:03.656791] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:28.564 [2024-12-09 23:03:03.656810] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:28.564 [2024-12-09 23:03:03.656821] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:28.564 [2024-12-09 23:03:03.659263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:28.564 [2024-12-09 23:03:03.659319] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:28.564 BaseBdev2 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.564 BaseBdev3_malloc 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.564 true 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.564 [2024-12-09 23:03:03.731918] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:28.564 [2024-12-09 23:03:03.731999] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:28.564 [2024-12-09 23:03:03.732022] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:28.564 [2024-12-09 23:03:03.732035] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:28.564 [2024-12-09 23:03:03.734631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:28.564 [2024-12-09 23:03:03.734689] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:28.564 BaseBdev3 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.564 [2024-12-09 23:03:03.744014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:28.564 [2024-12-09 23:03:03.746419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:28.564 [2024-12-09 23:03:03.746516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:28.564 [2024-12-09 23:03:03.746757] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:28.564 [2024-12-09 23:03:03.746769] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:28.564 [2024-12-09 23:03:03.747086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:21:28.564 [2024-12-09 23:03:03.747308] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:28.564 [2024-12-09 23:03:03.747320] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:28.564 [2024-12-09 23:03:03.747494] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.564 
23:03:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:28.564 "name": "raid_bdev1", 00:21:28.564 "uuid": "4dc93dab-c81c-4f41-b19c-56db0e8e3739", 00:21:28.564 "strip_size_kb": 0, 00:21:28.564 "state": "online", 00:21:28.564 "raid_level": "raid1", 00:21:28.564 "superblock": true, 00:21:28.564 "num_base_bdevs": 3, 00:21:28.564 "num_base_bdevs_discovered": 3, 00:21:28.564 "num_base_bdevs_operational": 3, 00:21:28.564 "base_bdevs_list": [ 00:21:28.564 { 00:21:28.564 "name": "BaseBdev1", 00:21:28.564 "uuid": "1119c8a3-5a50-51a4-9a9b-365d88cd06f0", 00:21:28.564 "is_configured": true, 00:21:28.564 "data_offset": 2048, 00:21:28.564 "data_size": 63488 00:21:28.564 }, 00:21:28.564 { 00:21:28.564 "name": "BaseBdev2", 00:21:28.564 "uuid": "b58160b4-af4b-5cb5-a9e1-50478fef54fc", 00:21:28.564 "is_configured": true, 00:21:28.564 "data_offset": 2048, 00:21:28.564 "data_size": 63488 00:21:28.564 }, 00:21:28.564 { 00:21:28.564 "name": "BaseBdev3", 00:21:28.564 "uuid": "72f7d53b-7f46-54dd-8983-7014387b374b", 00:21:28.564 "is_configured": true, 00:21:28.564 "data_offset": 2048, 00:21:28.564 "data_size": 63488 00:21:28.564 } 00:21:28.564 ] 00:21:28.564 }' 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:28.564 23:03:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.825 23:03:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:21:28.825 23:03:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:21:28.825 [2024-12-09 23:03:04.177261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:21:29.767 23:03:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:21:29.767 23:03:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:21:29.767 23:03:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.767 23:03:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.767 23:03:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:21:29.767 23:03:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:21:29.767 23:03:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:21:29.767 23:03:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:21:29.767 23:03:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:29.767 23:03:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:29.767 23:03:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:29.767 23:03:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:29.767 23:03:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:29.767 23:03:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:29.767 23:03:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:29.768 23:03:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:29.768 23:03:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:29.768 23:03:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:29.768 23:03:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.768 23:03:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.768 23:03:05 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.768 23:03:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.768 23:03:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.028 23:03:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:30.028 "name": "raid_bdev1", 00:21:30.028 "uuid": "4dc93dab-c81c-4f41-b19c-56db0e8e3739", 00:21:30.028 "strip_size_kb": 0, 00:21:30.028 "state": "online", 00:21:30.028 "raid_level": "raid1", 00:21:30.028 "superblock": true, 00:21:30.028 "num_base_bdevs": 3, 00:21:30.028 "num_base_bdevs_discovered": 3, 00:21:30.028 "num_base_bdevs_operational": 3, 00:21:30.028 "base_bdevs_list": [ 00:21:30.028 { 00:21:30.028 "name": "BaseBdev1", 00:21:30.028 "uuid": "1119c8a3-5a50-51a4-9a9b-365d88cd06f0", 00:21:30.028 "is_configured": true, 00:21:30.028 "data_offset": 2048, 00:21:30.028 "data_size": 63488 00:21:30.028 }, 00:21:30.028 { 00:21:30.028 "name": "BaseBdev2", 00:21:30.028 "uuid": "b58160b4-af4b-5cb5-a9e1-50478fef54fc", 00:21:30.028 "is_configured": true, 00:21:30.028 "data_offset": 2048, 00:21:30.028 "data_size": 63488 00:21:30.028 }, 00:21:30.028 { 00:21:30.028 "name": "BaseBdev3", 00:21:30.028 "uuid": "72f7d53b-7f46-54dd-8983-7014387b374b", 00:21:30.028 "is_configured": true, 00:21:30.028 "data_offset": 2048, 00:21:30.028 "data_size": 63488 00:21:30.028 } 00:21:30.028 ] 00:21:30.028 }' 00:21:30.028 23:03:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:30.028 23:03:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.290 23:03:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:30.290 23:03:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.290 23:03:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:30.290 [2024-12-09 23:03:05.413804] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:30.290 [2024-12-09 23:03:05.413843] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:30.290 [2024-12-09 23:03:05.417185] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:30.290 [2024-12-09 23:03:05.417247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:30.290 [2024-12-09 23:03:05.417360] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:30.290 [2024-12-09 23:03:05.417371] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:30.290 { 00:21:30.290 "results": [ 00:21:30.290 { 00:21:30.290 "job": "raid_bdev1", 00:21:30.290 "core_mask": "0x1", 00:21:30.290 "workload": "randrw", 00:21:30.290 "percentage": 50, 00:21:30.290 "status": "finished", 00:21:30.290 "queue_depth": 1, 00:21:30.290 "io_size": 131072, 00:21:30.290 "runtime": 1.234565, 00:21:30.290 "iops": 10157.423869946095, 00:21:30.290 "mibps": 1269.6779837432618, 00:21:30.290 "io_failed": 0, 00:21:30.290 "io_timeout": 0, 00:21:30.290 "avg_latency_us": 95.06562826647037, 00:21:30.290 "min_latency_us": 30.916923076923077, 00:21:30.290 "max_latency_us": 1814.843076923077 00:21:30.290 } 00:21:30.290 ], 00:21:30.290 "core_count": 1 00:21:30.290 } 00:21:30.290 23:03:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.290 23:03:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67387 00:21:30.290 23:03:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67387 ']' 00:21:30.290 23:03:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67387 00:21:30.290 23:03:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:21:30.290 23:03:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:30.290 23:03:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67387 00:21:30.290 killing process with pid 67387 00:21:30.290 23:03:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:30.290 23:03:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:30.291 23:03:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67387' 00:21:30.291 23:03:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67387 00:21:30.291 23:03:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67387 00:21:30.291 [2024-12-09 23:03:05.448196] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:30.291 [2024-12-09 23:03:05.610438] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:31.248 23:03:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.INVzkRGPAd 00:21:31.248 23:03:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:21:31.248 23:03:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:21:31.248 23:03:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:21:31.248 23:03:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:21:31.248 23:03:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:31.248 23:03:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:21:31.248 23:03:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:21:31.248 ************************************ 00:21:31.248 END TEST raid_read_error_test 
00:21:31.248 ************************************ 00:21:31.248 00:21:31.248 real 0m3.907s 00:21:31.248 user 0m4.519s 00:21:31.248 sys 0m0.517s 00:21:31.248 23:03:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:31.248 23:03:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.248 23:03:06 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:21:31.248 23:03:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:31.248 23:03:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:31.248 23:03:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:31.248 ************************************ 00:21:31.248 START TEST raid_write_error_test 00:21:31.248 ************************************ 00:21:31.248 23:03:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:21:31.248 23:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:21:31.248 23:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:21:31.248 23:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:21:31.248 23:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:21:31.248 23:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:31.248 23:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:21:31.248 23:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:31.248 23:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:31.248 23:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:21:31.248 23:03:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:31.248 23:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:31.248 23:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:21:31.248 23:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:31.248 23:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:31.248 23:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:31.248 23:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:21:31.248 23:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:21:31.248 23:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:21:31.248 23:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:21:31.248 23:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:21:31.248 23:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:21:31.248 23:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:21:31.248 23:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:21:31.248 23:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:21:31.248 23:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ir3sZKANCu 00:21:31.248 23:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67517 00:21:31.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:31.248 23:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67517 00:21:31.248 23:03:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67517 ']' 00:21:31.248 23:03:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.248 23:03:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:31.248 23:03:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:31.249 23:03:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.249 23:03:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:31.249 23:03:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.547 [2024-12-09 23:03:06.650391] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:21:31.547 [2024-12-09 23:03:06.650952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67517 ] 00:21:31.547 [2024-12-09 23:03:06.827775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.810 [2024-12-09 23:03:06.971139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.810 [2024-12-09 23:03:07.141580] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:31.810 [2024-12-09 23:03:07.141873] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.386 BaseBdev1_malloc 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.386 true 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.386 [2024-12-09 23:03:07.568941] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:32.386 [2024-12-09 23:03:07.569211] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:32.386 [2024-12-09 23:03:07.569247] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:32.386 [2024-12-09 23:03:07.569260] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:32.386 [2024-12-09 23:03:07.571904] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:32.386 [2024-12-09 23:03:07.571968] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:32.386 BaseBdev1 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.386 BaseBdev2_malloc 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:21:32.386 23:03:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.386 true 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.386 [2024-12-09 23:03:07.619957] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:32.386 [2024-12-09 23:03:07.620224] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:32.386 [2024-12-09 23:03:07.620256] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:32.386 [2024-12-09 23:03:07.620273] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:32.386 [2024-12-09 23:03:07.622905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:32.386 [2024-12-09 23:03:07.622963] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:32.386 BaseBdev2 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:21:32.386 BaseBdev3_malloc 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.386 true 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.386 [2024-12-09 23:03:07.691592] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:32.386 [2024-12-09 23:03:07.691856] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:32.386 [2024-12-09 23:03:07.691893] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:32.386 [2024-12-09 23:03:07.691905] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:32.386 [2024-12-09 23:03:07.694606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:32.386 [2024-12-09 23:03:07.694664] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:32.386 BaseBdev3 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 
00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.386 [2024-12-09 23:03:07.699668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:32.386 [2024-12-09 23:03:07.701912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:32.386 [2024-12-09 23:03:07.702011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:32.386 [2024-12-09 23:03:07.702284] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:32.386 [2024-12-09 23:03:07.702296] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:32.386 [2024-12-09 23:03:07.702635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:21:32.386 [2024-12-09 23:03:07.702822] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:32.386 [2024-12-09 23:03:07.702835] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:32.386 [2024-12-09 23:03:07.703016] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.386 23:03:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:32.386 "name": "raid_bdev1", 00:21:32.386 "uuid": "f3831e1a-40a2-4b01-9e1a-26af1c92b739", 00:21:32.386 "strip_size_kb": 0, 00:21:32.386 "state": "online", 00:21:32.386 "raid_level": "raid1", 00:21:32.386 "superblock": true, 00:21:32.386 "num_base_bdevs": 3, 00:21:32.386 "num_base_bdevs_discovered": 3, 00:21:32.386 "num_base_bdevs_operational": 3, 00:21:32.387 "base_bdevs_list": [ 00:21:32.387 { 00:21:32.387 "name": "BaseBdev1", 00:21:32.387 "uuid": "530e3b5e-7b51-50fd-92af-8848376590f7", 00:21:32.387 "is_configured": true, 00:21:32.387 "data_offset": 2048, 00:21:32.387 "data_size": 63488 00:21:32.387 }, 00:21:32.387 { 00:21:32.387 "name": "BaseBdev2", 00:21:32.387 "uuid": "3a095e35-0a61-5416-95e5-4eb37a829e0b", 00:21:32.387 "is_configured": 
true, 00:21:32.387 "data_offset": 2048, 00:21:32.387 "data_size": 63488 00:21:32.387 }, 00:21:32.387 { 00:21:32.387 "name": "BaseBdev3", 00:21:32.387 "uuid": "7c4b6a19-36f6-5e08-a186-862dcbcb8121", 00:21:32.387 "is_configured": true, 00:21:32.387 "data_offset": 2048, 00:21:32.387 "data_size": 63488 00:21:32.387 } 00:21:32.387 ] 00:21:32.387 }' 00:21:32.387 23:03:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:32.387 23:03:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.961 23:03:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:21:32.961 23:03:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:21:32.961 [2024-12-09 23:03:08.128911] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:21:33.906 23:03:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:21:33.906 23:03:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.907 23:03:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.907 [2024-12-09 23:03:09.040763] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:21:33.907 [2024-12-09 23:03:09.040841] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:33.907 [2024-12-09 23:03:09.041085] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 00:21:33.907 23:03:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.907 23:03:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:21:33.907 23:03:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = 
\r\a\i\d\1 ]] 00:21:33.907 23:03:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:21:33.907 23:03:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:21:33.907 23:03:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:33.907 23:03:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:33.907 23:03:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:33.907 23:03:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:33.907 23:03:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:33.907 23:03:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:33.907 23:03:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:33.907 23:03:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:33.907 23:03:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:33.907 23:03:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:33.907 23:03:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.907 23:03:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.907 23:03:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.907 23:03:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.907 23:03:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.907 23:03:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:21:33.907 "name": "raid_bdev1", 00:21:33.907 "uuid": "f3831e1a-40a2-4b01-9e1a-26af1c92b739", 00:21:33.907 "strip_size_kb": 0, 00:21:33.907 "state": "online", 00:21:33.907 "raid_level": "raid1", 00:21:33.907 "superblock": true, 00:21:33.907 "num_base_bdevs": 3, 00:21:33.907 "num_base_bdevs_discovered": 2, 00:21:33.907 "num_base_bdevs_operational": 2, 00:21:33.907 "base_bdevs_list": [ 00:21:33.907 { 00:21:33.907 "name": null, 00:21:33.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.907 "is_configured": false, 00:21:33.907 "data_offset": 0, 00:21:33.907 "data_size": 63488 00:21:33.907 }, 00:21:33.907 { 00:21:33.907 "name": "BaseBdev2", 00:21:33.907 "uuid": "3a095e35-0a61-5416-95e5-4eb37a829e0b", 00:21:33.907 "is_configured": true, 00:21:33.907 "data_offset": 2048, 00:21:33.907 "data_size": 63488 00:21:33.907 }, 00:21:33.907 { 00:21:33.907 "name": "BaseBdev3", 00:21:33.907 "uuid": "7c4b6a19-36f6-5e08-a186-862dcbcb8121", 00:21:33.907 "is_configured": true, 00:21:33.907 "data_offset": 2048, 00:21:33.907 "data_size": 63488 00:21:33.907 } 00:21:33.907 ] 00:21:33.907 }' 00:21:33.907 23:03:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:33.907 23:03:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.204 23:03:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:34.204 23:03:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.204 23:03:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.204 [2024-12-09 23:03:09.412427] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:34.204 [2024-12-09 23:03:09.412640] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:34.204 [2024-12-09 23:03:09.415902] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:21:34.204 [2024-12-09 23:03:09.415969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:34.204 [2024-12-09 23:03:09.416062] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:34.204 [2024-12-09 23:03:09.416079] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:34.204 { 00:21:34.204 "results": [ 00:21:34.204 { 00:21:34.204 "job": "raid_bdev1", 00:21:34.204 "core_mask": "0x1", 00:21:34.204 "workload": "randrw", 00:21:34.204 "percentage": 50, 00:21:34.204 "status": "finished", 00:21:34.204 "queue_depth": 1, 00:21:34.204 "io_size": 131072, 00:21:34.204 "runtime": 1.281626, 00:21:34.204 "iops": 11252.112550775344, 00:21:34.204 "mibps": 1406.514068846918, 00:21:34.204 "io_failed": 0, 00:21:34.204 "io_timeout": 0, 00:21:34.204 "avg_latency_us": 85.46873245747388, 00:21:34.204 "min_latency_us": 30.523076923076925, 00:21:34.204 "max_latency_us": 1802.24 00:21:34.204 } 00:21:34.204 ], 00:21:34.204 "core_count": 1 00:21:34.204 } 00:21:34.204 23:03:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.204 23:03:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67517 00:21:34.204 23:03:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67517 ']' 00:21:34.204 23:03:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67517 00:21:34.204 23:03:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:21:34.204 23:03:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:34.204 23:03:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67517 00:21:34.204 killing process with pid 67517 00:21:34.204 23:03:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:21:34.204 23:03:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:34.204 23:03:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67517' 00:21:34.204 23:03:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67517 00:21:34.204 [2024-12-09 23:03:09.444781] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:34.204 23:03:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67517 00:21:34.491 [2024-12-09 23:03:09.608215] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:35.433 23:03:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ir3sZKANCu 00:21:35.433 23:03:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:21:35.433 23:03:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:21:35.433 23:03:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:21:35.433 23:03:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:21:35.433 23:03:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:35.433 23:03:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:21:35.433 23:03:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:21:35.433 00:21:35.433 real 0m3.925s 00:21:35.433 user 0m4.566s 00:21:35.433 sys 0m0.528s 00:21:35.433 ************************************ 00:21:35.433 END TEST raid_write_error_test 00:21:35.433 ************************************ 00:21:35.433 23:03:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:35.433 23:03:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.433 23:03:10 bdev_raid -- 
bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:21:35.433 23:03:10 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:21:35.433 23:03:10 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:21:35.433 23:03:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:35.433 23:03:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:35.433 23:03:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:35.433 ************************************ 00:21:35.433 START TEST raid_state_function_test 00:21:35.433 ************************************ 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:35.433 23:03:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:21:35.433 Process raid pid: 67655 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # 
superblock_create_arg= 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67655 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67655' 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67655 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67655 ']' 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:35.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:35.433 23:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.433 [2024-12-09 23:03:10.643150] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:21:35.433 [2024-12-09 23:03:10.643549] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.695 [2024-12-09 23:03:10.827579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.695 [2024-12-09 23:03:11.003079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.954 [2024-12-09 23:03:11.178025] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:35.954 [2024-12-09 23:03:11.178080] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:36.215 23:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:36.215 23:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:21:36.215 23:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:36.215 23:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.215 23:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.215 [2024-12-09 23:03:11.559620] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:36.215 [2024-12-09 23:03:11.559707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:36.215 [2024-12-09 23:03:11.559720] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:36.215 [2024-12-09 23:03:11.559734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:36.215 [2024-12-09 23:03:11.559742] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:21:36.215 [2024-12-09 23:03:11.559752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:36.215 [2024-12-09 23:03:11.559759] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:36.215 [2024-12-09 23:03:11.559770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:36.215 23:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.215 23:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:36.215 23:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:36.215 23:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:36.215 23:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:36.215 23:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:36.215 23:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:36.215 23:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:36.215 23:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:36.215 23:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:36.215 23:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:36.215 23:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.215 23:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:36.215 23:03:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.215 23:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.476 23:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.476 23:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:36.476 "name": "Existed_Raid", 00:21:36.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.476 "strip_size_kb": 64, 00:21:36.476 "state": "configuring", 00:21:36.476 "raid_level": "raid0", 00:21:36.476 "superblock": false, 00:21:36.476 "num_base_bdevs": 4, 00:21:36.476 "num_base_bdevs_discovered": 0, 00:21:36.476 "num_base_bdevs_operational": 4, 00:21:36.476 "base_bdevs_list": [ 00:21:36.476 { 00:21:36.476 "name": "BaseBdev1", 00:21:36.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.476 "is_configured": false, 00:21:36.476 "data_offset": 0, 00:21:36.476 "data_size": 0 00:21:36.476 }, 00:21:36.476 { 00:21:36.476 "name": "BaseBdev2", 00:21:36.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.476 "is_configured": false, 00:21:36.476 "data_offset": 0, 00:21:36.476 "data_size": 0 00:21:36.476 }, 00:21:36.476 { 00:21:36.476 "name": "BaseBdev3", 00:21:36.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.476 "is_configured": false, 00:21:36.476 "data_offset": 0, 00:21:36.476 "data_size": 0 00:21:36.476 }, 00:21:36.476 { 00:21:36.476 "name": "BaseBdev4", 00:21:36.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.476 "is_configured": false, 00:21:36.476 "data_offset": 0, 00:21:36.476 "data_size": 0 00:21:36.476 } 00:21:36.476 ] 00:21:36.476 }' 00:21:36.477 23:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:36.477 23:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.738 23:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:21:36.738 23:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.738 23:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.738 [2024-12-09 23:03:11.899639] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:36.738 [2024-12-09 23:03:11.899692] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:36.738 23:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.738 23:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:36.738 23:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.738 23:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.738 [2024-12-09 23:03:11.911672] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:36.738 [2024-12-09 23:03:11.911762] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:36.738 [2024-12-09 23:03:11.911779] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:36.738 [2024-12-09 23:03:11.911794] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:36.738 [2024-12-09 23:03:11.911805] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:36.738 [2024-12-09 23:03:11.911819] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:36.738 [2024-12-09 23:03:11.911829] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:36.739 [2024-12-09 23:03:11.911843] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:36.739 23:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.739 23:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:36.739 23:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.739 23:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.739 [2024-12-09 23:03:11.950847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:36.739 BaseBdev1 00:21:36.739 23:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.739 23:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:36.739 23:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:36.739 23:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:36.739 23:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:36.739 23:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:36.739 23:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:36.739 23:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:36.739 23:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.739 23:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.739 23:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.739 23:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:36.739 23:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.739 23:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.739 [ 00:21:36.739 { 00:21:36.739 "name": "BaseBdev1", 00:21:36.739 "aliases": [ 00:21:36.739 "25d0eec8-f484-4100-ac54-e2089ae4ffa5" 00:21:36.739 ], 00:21:36.739 "product_name": "Malloc disk", 00:21:36.739 "block_size": 512, 00:21:36.739 "num_blocks": 65536, 00:21:36.739 "uuid": "25d0eec8-f484-4100-ac54-e2089ae4ffa5", 00:21:36.739 "assigned_rate_limits": { 00:21:36.739 "rw_ios_per_sec": 0, 00:21:36.739 "rw_mbytes_per_sec": 0, 00:21:36.739 "r_mbytes_per_sec": 0, 00:21:36.739 "w_mbytes_per_sec": 0 00:21:36.739 }, 00:21:36.739 "claimed": true, 00:21:36.739 "claim_type": "exclusive_write", 00:21:36.739 "zoned": false, 00:21:36.739 "supported_io_types": { 00:21:36.739 "read": true, 00:21:36.739 "write": true, 00:21:36.739 "unmap": true, 00:21:36.739 "flush": true, 00:21:36.739 "reset": true, 00:21:36.739 "nvme_admin": false, 00:21:36.739 "nvme_io": false, 00:21:36.739 "nvme_io_md": false, 00:21:36.739 "write_zeroes": true, 00:21:36.739 "zcopy": true, 00:21:36.739 "get_zone_info": false, 00:21:36.739 "zone_management": false, 00:21:36.739 "zone_append": false, 00:21:36.739 "compare": false, 00:21:36.739 "compare_and_write": false, 00:21:36.739 "abort": true, 00:21:36.739 "seek_hole": false, 00:21:36.739 "seek_data": false, 00:21:36.739 "copy": true, 00:21:36.739 "nvme_iov_md": false 00:21:36.739 }, 00:21:36.739 "memory_domains": [ 00:21:36.739 { 00:21:36.739 "dma_device_id": "system", 00:21:36.739 "dma_device_type": 1 00:21:36.739 }, 00:21:36.739 { 00:21:36.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:36.739 "dma_device_type": 2 00:21:36.739 } 00:21:36.739 ], 00:21:36.739 "driver_specific": {} 00:21:36.739 } 00:21:36.739 ] 00:21:36.739 23:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:21:36.739 23:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:36.739 23:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:36.739 23:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:36.739 23:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:36.739 23:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:36.739 23:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:36.739 23:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:36.739 23:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:36.739 23:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:36.739 23:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:36.739 23:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:36.739 23:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.739 23:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:36.739 23:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.739 23:03:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.739 23:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.739 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:36.739 "name": "Existed_Raid", 
00:21:36.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.739 "strip_size_kb": 64, 00:21:36.739 "state": "configuring", 00:21:36.739 "raid_level": "raid0", 00:21:36.739 "superblock": false, 00:21:36.739 "num_base_bdevs": 4, 00:21:36.739 "num_base_bdevs_discovered": 1, 00:21:36.739 "num_base_bdevs_operational": 4, 00:21:36.739 "base_bdevs_list": [ 00:21:36.739 { 00:21:36.739 "name": "BaseBdev1", 00:21:36.739 "uuid": "25d0eec8-f484-4100-ac54-e2089ae4ffa5", 00:21:36.739 "is_configured": true, 00:21:36.739 "data_offset": 0, 00:21:36.739 "data_size": 65536 00:21:36.739 }, 00:21:36.739 { 00:21:36.739 "name": "BaseBdev2", 00:21:36.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.739 "is_configured": false, 00:21:36.739 "data_offset": 0, 00:21:36.739 "data_size": 0 00:21:36.739 }, 00:21:36.739 { 00:21:36.739 "name": "BaseBdev3", 00:21:36.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.739 "is_configured": false, 00:21:36.739 "data_offset": 0, 00:21:36.739 "data_size": 0 00:21:36.739 }, 00:21:36.739 { 00:21:36.739 "name": "BaseBdev4", 00:21:36.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.739 "is_configured": false, 00:21:36.739 "data_offset": 0, 00:21:36.739 "data_size": 0 00:21:36.739 } 00:21:36.739 ] 00:21:36.739 }' 00:21:36.739 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:36.739 23:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.001 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:37.001 23:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.001 23:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.001 [2024-12-09 23:03:12.351017] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:37.001 [2024-12-09 23:03:12.351086] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:37.001 23:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.001 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:37.001 23:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.001 23:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.263 [2024-12-09 23:03:12.363085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:37.263 [2024-12-09 23:03:12.365322] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:37.263 [2024-12-09 23:03:12.365552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:37.263 [2024-12-09 23:03:12.365573] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:37.263 [2024-12-09 23:03:12.365587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:37.263 [2024-12-09 23:03:12.365594] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:37.263 [2024-12-09 23:03:12.365604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:37.263 23:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.263 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:37.263 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:37.263 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:21:37.263 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:37.263 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:37.263 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:37.263 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:37.263 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:37.263 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:37.263 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:37.263 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:37.263 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:37.263 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.263 23:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.263 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:37.263 23:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.263 23:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.263 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:37.263 "name": "Existed_Raid", 00:21:37.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.263 "strip_size_kb": 64, 00:21:37.263 "state": "configuring", 00:21:37.263 "raid_level": "raid0", 00:21:37.263 "superblock": false, 00:21:37.263 "num_base_bdevs": 4, 00:21:37.263 
"num_base_bdevs_discovered": 1, 00:21:37.263 "num_base_bdevs_operational": 4, 00:21:37.263 "base_bdevs_list": [ 00:21:37.263 { 00:21:37.263 "name": "BaseBdev1", 00:21:37.263 "uuid": "25d0eec8-f484-4100-ac54-e2089ae4ffa5", 00:21:37.263 "is_configured": true, 00:21:37.263 "data_offset": 0, 00:21:37.263 "data_size": 65536 00:21:37.263 }, 00:21:37.263 { 00:21:37.263 "name": "BaseBdev2", 00:21:37.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.263 "is_configured": false, 00:21:37.263 "data_offset": 0, 00:21:37.263 "data_size": 0 00:21:37.263 }, 00:21:37.263 { 00:21:37.263 "name": "BaseBdev3", 00:21:37.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.263 "is_configured": false, 00:21:37.263 "data_offset": 0, 00:21:37.263 "data_size": 0 00:21:37.263 }, 00:21:37.263 { 00:21:37.263 "name": "BaseBdev4", 00:21:37.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.263 "is_configured": false, 00:21:37.263 "data_offset": 0, 00:21:37.263 "data_size": 0 00:21:37.263 } 00:21:37.263 ] 00:21:37.263 }' 00:21:37.263 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:37.263 23:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.524 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:37.525 23:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.525 23:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.525 [2024-12-09 23:03:12.770401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:37.525 BaseBdev2 00:21:37.525 23:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.525 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:37.525 23:03:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:37.525 23:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:37.525 23:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:37.525 23:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:37.525 23:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:37.525 23:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:37.525 23:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.525 23:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.525 23:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.525 23:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:37.525 23:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.525 23:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.525 [ 00:21:37.525 { 00:21:37.525 "name": "BaseBdev2", 00:21:37.525 "aliases": [ 00:21:37.525 "394b769e-2dec-4e64-b704-4e9cdf6bb808" 00:21:37.525 ], 00:21:37.525 "product_name": "Malloc disk", 00:21:37.525 "block_size": 512, 00:21:37.525 "num_blocks": 65536, 00:21:37.525 "uuid": "394b769e-2dec-4e64-b704-4e9cdf6bb808", 00:21:37.525 "assigned_rate_limits": { 00:21:37.525 "rw_ios_per_sec": 0, 00:21:37.525 "rw_mbytes_per_sec": 0, 00:21:37.525 "r_mbytes_per_sec": 0, 00:21:37.525 "w_mbytes_per_sec": 0 00:21:37.525 }, 00:21:37.525 "claimed": true, 00:21:37.525 "claim_type": "exclusive_write", 00:21:37.525 "zoned": false, 00:21:37.525 "supported_io_types": { 
00:21:37.525 "read": true, 00:21:37.525 "write": true, 00:21:37.525 "unmap": true, 00:21:37.525 "flush": true, 00:21:37.525 "reset": true, 00:21:37.525 "nvme_admin": false, 00:21:37.525 "nvme_io": false, 00:21:37.525 "nvme_io_md": false, 00:21:37.525 "write_zeroes": true, 00:21:37.525 "zcopy": true, 00:21:37.525 "get_zone_info": false, 00:21:37.525 "zone_management": false, 00:21:37.525 "zone_append": false, 00:21:37.525 "compare": false, 00:21:37.525 "compare_and_write": false, 00:21:37.525 "abort": true, 00:21:37.525 "seek_hole": false, 00:21:37.525 "seek_data": false, 00:21:37.525 "copy": true, 00:21:37.525 "nvme_iov_md": false 00:21:37.525 }, 00:21:37.525 "memory_domains": [ 00:21:37.525 { 00:21:37.525 "dma_device_id": "system", 00:21:37.525 "dma_device_type": 1 00:21:37.525 }, 00:21:37.525 { 00:21:37.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:37.525 "dma_device_type": 2 00:21:37.525 } 00:21:37.525 ], 00:21:37.525 "driver_specific": {} 00:21:37.525 } 00:21:37.525 ] 00:21:37.525 23:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.525 23:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:37.525 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:37.525 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:37.525 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:37.525 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:37.525 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:37.525 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:37.525 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:21:37.525 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:37.525 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:37.525 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:37.525 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:37.525 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:37.525 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.525 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:37.525 23:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.525 23:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.525 23:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.525 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:37.525 "name": "Existed_Raid", 00:21:37.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.525 "strip_size_kb": 64, 00:21:37.525 "state": "configuring", 00:21:37.525 "raid_level": "raid0", 00:21:37.525 "superblock": false, 00:21:37.525 "num_base_bdevs": 4, 00:21:37.525 "num_base_bdevs_discovered": 2, 00:21:37.525 "num_base_bdevs_operational": 4, 00:21:37.525 "base_bdevs_list": [ 00:21:37.525 { 00:21:37.525 "name": "BaseBdev1", 00:21:37.525 "uuid": "25d0eec8-f484-4100-ac54-e2089ae4ffa5", 00:21:37.525 "is_configured": true, 00:21:37.525 "data_offset": 0, 00:21:37.525 "data_size": 65536 00:21:37.525 }, 00:21:37.525 { 00:21:37.525 "name": "BaseBdev2", 00:21:37.525 "uuid": "394b769e-2dec-4e64-b704-4e9cdf6bb808", 00:21:37.525 
"is_configured": true, 00:21:37.525 "data_offset": 0, 00:21:37.525 "data_size": 65536 00:21:37.525 }, 00:21:37.525 { 00:21:37.525 "name": "BaseBdev3", 00:21:37.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.525 "is_configured": false, 00:21:37.526 "data_offset": 0, 00:21:37.526 "data_size": 0 00:21:37.526 }, 00:21:37.526 { 00:21:37.526 "name": "BaseBdev4", 00:21:37.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.526 "is_configured": false, 00:21:37.526 "data_offset": 0, 00:21:37.526 "data_size": 0 00:21:37.526 } 00:21:37.526 ] 00:21:37.526 }' 00:21:37.526 23:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:37.526 23:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.787 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:37.787 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.787 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.049 [2024-12-09 23:03:13.170798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:38.049 BaseBdev3 00:21:38.049 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.049 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:21:38.049 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:38.049 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:38.049 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:38.049 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:38.049 23:03:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:38.049 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:38.049 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.049 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.049 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.049 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:38.049 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.049 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.049 [ 00:21:38.049 { 00:21:38.049 "name": "BaseBdev3", 00:21:38.049 "aliases": [ 00:21:38.049 "342ad13d-711c-4629-b907-99489c72de1b" 00:21:38.049 ], 00:21:38.049 "product_name": "Malloc disk", 00:21:38.049 "block_size": 512, 00:21:38.049 "num_blocks": 65536, 00:21:38.049 "uuid": "342ad13d-711c-4629-b907-99489c72de1b", 00:21:38.049 "assigned_rate_limits": { 00:21:38.049 "rw_ios_per_sec": 0, 00:21:38.049 "rw_mbytes_per_sec": 0, 00:21:38.049 "r_mbytes_per_sec": 0, 00:21:38.049 "w_mbytes_per_sec": 0 00:21:38.049 }, 00:21:38.049 "claimed": true, 00:21:38.049 "claim_type": "exclusive_write", 00:21:38.049 "zoned": false, 00:21:38.049 "supported_io_types": { 00:21:38.050 "read": true, 00:21:38.050 "write": true, 00:21:38.050 "unmap": true, 00:21:38.050 "flush": true, 00:21:38.050 "reset": true, 00:21:38.050 "nvme_admin": false, 00:21:38.050 "nvme_io": false, 00:21:38.050 "nvme_io_md": false, 00:21:38.050 "write_zeroes": true, 00:21:38.050 "zcopy": true, 00:21:38.050 "get_zone_info": false, 00:21:38.050 "zone_management": false, 00:21:38.050 "zone_append": false, 00:21:38.050 "compare": false, 00:21:38.050 "compare_and_write": false, 
00:21:38.050 "abort": true, 00:21:38.050 "seek_hole": false, 00:21:38.050 "seek_data": false, 00:21:38.050 "copy": true, 00:21:38.050 "nvme_iov_md": false 00:21:38.050 }, 00:21:38.050 "memory_domains": [ 00:21:38.050 { 00:21:38.050 "dma_device_id": "system", 00:21:38.050 "dma_device_type": 1 00:21:38.050 }, 00:21:38.050 { 00:21:38.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:38.050 "dma_device_type": 2 00:21:38.050 } 00:21:38.050 ], 00:21:38.050 "driver_specific": {} 00:21:38.050 } 00:21:38.050 ] 00:21:38.050 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.050 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:38.050 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:38.050 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:38.050 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:38.050 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:38.050 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:38.050 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:38.050 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:38.050 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:38.050 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:38.050 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:38.050 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:21:38.050 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:38.050 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:38.050 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.050 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.050 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.050 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.050 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:38.050 "name": "Existed_Raid", 00:21:38.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.050 "strip_size_kb": 64, 00:21:38.050 "state": "configuring", 00:21:38.050 "raid_level": "raid0", 00:21:38.050 "superblock": false, 00:21:38.050 "num_base_bdevs": 4, 00:21:38.050 "num_base_bdevs_discovered": 3, 00:21:38.050 "num_base_bdevs_operational": 4, 00:21:38.050 "base_bdevs_list": [ 00:21:38.050 { 00:21:38.050 "name": "BaseBdev1", 00:21:38.050 "uuid": "25d0eec8-f484-4100-ac54-e2089ae4ffa5", 00:21:38.050 "is_configured": true, 00:21:38.050 "data_offset": 0, 00:21:38.050 "data_size": 65536 00:21:38.050 }, 00:21:38.050 { 00:21:38.050 "name": "BaseBdev2", 00:21:38.050 "uuid": "394b769e-2dec-4e64-b704-4e9cdf6bb808", 00:21:38.050 "is_configured": true, 00:21:38.050 "data_offset": 0, 00:21:38.050 "data_size": 65536 00:21:38.050 }, 00:21:38.050 { 00:21:38.050 "name": "BaseBdev3", 00:21:38.050 "uuid": "342ad13d-711c-4629-b907-99489c72de1b", 00:21:38.050 "is_configured": true, 00:21:38.050 "data_offset": 0, 00:21:38.050 "data_size": 65536 00:21:38.050 }, 00:21:38.050 { 00:21:38.050 "name": "BaseBdev4", 00:21:38.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.050 "is_configured": false, 
00:21:38.050 "data_offset": 0, 00:21:38.050 "data_size": 0 00:21:38.050 } 00:21:38.050 ] 00:21:38.050 }' 00:21:38.050 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:38.050 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.320 [2024-12-09 23:03:13.554867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:38.320 [2024-12-09 23:03:13.555184] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:38.320 [2024-12-09 23:03:13.555209] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:21:38.320 [2024-12-09 23:03:13.555548] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:38.320 [2024-12-09 23:03:13.555732] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:38.320 [2024-12-09 23:03:13.555745] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:38.320 [2024-12-09 23:03:13.556051] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:38.320 BaseBdev4 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.320 [ 00:21:38.320 { 00:21:38.320 "name": "BaseBdev4", 00:21:38.320 "aliases": [ 00:21:38.320 "10113ade-a26f-4b15-919b-b8c3b13d3ca3" 00:21:38.320 ], 00:21:38.320 "product_name": "Malloc disk", 00:21:38.320 "block_size": 512, 00:21:38.320 "num_blocks": 65536, 00:21:38.320 "uuid": "10113ade-a26f-4b15-919b-b8c3b13d3ca3", 00:21:38.320 "assigned_rate_limits": { 00:21:38.320 "rw_ios_per_sec": 0, 00:21:38.320 "rw_mbytes_per_sec": 0, 00:21:38.320 "r_mbytes_per_sec": 0, 00:21:38.320 "w_mbytes_per_sec": 0 00:21:38.320 }, 00:21:38.320 "claimed": true, 00:21:38.320 "claim_type": "exclusive_write", 00:21:38.320 "zoned": false, 00:21:38.320 "supported_io_types": { 00:21:38.320 "read": true, 00:21:38.320 "write": true, 00:21:38.320 "unmap": true, 00:21:38.320 "flush": true, 00:21:38.320 "reset": true, 00:21:38.320 
"nvme_admin": false, 00:21:38.320 "nvme_io": false, 00:21:38.320 "nvme_io_md": false, 00:21:38.320 "write_zeroes": true, 00:21:38.320 "zcopy": true, 00:21:38.320 "get_zone_info": false, 00:21:38.320 "zone_management": false, 00:21:38.320 "zone_append": false, 00:21:38.320 "compare": false, 00:21:38.320 "compare_and_write": false, 00:21:38.320 "abort": true, 00:21:38.320 "seek_hole": false, 00:21:38.320 "seek_data": false, 00:21:38.320 "copy": true, 00:21:38.320 "nvme_iov_md": false 00:21:38.320 }, 00:21:38.320 "memory_domains": [ 00:21:38.320 { 00:21:38.320 "dma_device_id": "system", 00:21:38.320 "dma_device_type": 1 00:21:38.320 }, 00:21:38.320 { 00:21:38.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:38.320 "dma_device_type": 2 00:21:38.320 } 00:21:38.320 ], 00:21:38.320 "driver_specific": {} 00:21:38.320 } 00:21:38.320 ] 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:38.320 23:03:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.320 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:38.320 "name": "Existed_Raid", 00:21:38.320 "uuid": "33e00fe4-234f-4e64-bc0e-0c7a0f54e6f2", 00:21:38.320 "strip_size_kb": 64, 00:21:38.320 "state": "online", 00:21:38.320 "raid_level": "raid0", 00:21:38.321 "superblock": false, 00:21:38.321 "num_base_bdevs": 4, 00:21:38.321 "num_base_bdevs_discovered": 4, 00:21:38.321 "num_base_bdevs_operational": 4, 00:21:38.321 "base_bdevs_list": [ 00:21:38.321 { 00:21:38.321 "name": "BaseBdev1", 00:21:38.321 "uuid": "25d0eec8-f484-4100-ac54-e2089ae4ffa5", 00:21:38.321 "is_configured": true, 00:21:38.321 "data_offset": 0, 00:21:38.321 "data_size": 65536 00:21:38.321 }, 00:21:38.321 { 00:21:38.321 "name": "BaseBdev2", 00:21:38.321 "uuid": "394b769e-2dec-4e64-b704-4e9cdf6bb808", 00:21:38.321 "is_configured": true, 00:21:38.321 "data_offset": 0, 00:21:38.321 "data_size": 65536 00:21:38.321 }, 00:21:38.321 { 00:21:38.321 "name": "BaseBdev3", 00:21:38.321 "uuid": 
"342ad13d-711c-4629-b907-99489c72de1b", 00:21:38.321 "is_configured": true, 00:21:38.321 "data_offset": 0, 00:21:38.321 "data_size": 65536 00:21:38.321 }, 00:21:38.321 { 00:21:38.321 "name": "BaseBdev4", 00:21:38.321 "uuid": "10113ade-a26f-4b15-919b-b8c3b13d3ca3", 00:21:38.321 "is_configured": true, 00:21:38.321 "data_offset": 0, 00:21:38.321 "data_size": 65536 00:21:38.321 } 00:21:38.321 ] 00:21:38.321 }' 00:21:38.321 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:38.321 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.582 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:38.582 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:38.582 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:38.582 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:38.582 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:38.582 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:38.582 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:38.582 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.582 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.582 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:38.582 [2024-12-09 23:03:13.907466] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:38.582 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.582 23:03:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:38.582 "name": "Existed_Raid", 00:21:38.582 "aliases": [ 00:21:38.582 "33e00fe4-234f-4e64-bc0e-0c7a0f54e6f2" 00:21:38.582 ], 00:21:38.582 "product_name": "Raid Volume", 00:21:38.582 "block_size": 512, 00:21:38.582 "num_blocks": 262144, 00:21:38.582 "uuid": "33e00fe4-234f-4e64-bc0e-0c7a0f54e6f2", 00:21:38.582 "assigned_rate_limits": { 00:21:38.582 "rw_ios_per_sec": 0, 00:21:38.582 "rw_mbytes_per_sec": 0, 00:21:38.582 "r_mbytes_per_sec": 0, 00:21:38.582 "w_mbytes_per_sec": 0 00:21:38.582 }, 00:21:38.582 "claimed": false, 00:21:38.582 "zoned": false, 00:21:38.582 "supported_io_types": { 00:21:38.582 "read": true, 00:21:38.582 "write": true, 00:21:38.582 "unmap": true, 00:21:38.582 "flush": true, 00:21:38.582 "reset": true, 00:21:38.582 "nvme_admin": false, 00:21:38.582 "nvme_io": false, 00:21:38.582 "nvme_io_md": false, 00:21:38.582 "write_zeroes": true, 00:21:38.582 "zcopy": false, 00:21:38.582 "get_zone_info": false, 00:21:38.582 "zone_management": false, 00:21:38.582 "zone_append": false, 00:21:38.582 "compare": false, 00:21:38.582 "compare_and_write": false, 00:21:38.582 "abort": false, 00:21:38.582 "seek_hole": false, 00:21:38.582 "seek_data": false, 00:21:38.582 "copy": false, 00:21:38.582 "nvme_iov_md": false 00:21:38.582 }, 00:21:38.582 "memory_domains": [ 00:21:38.582 { 00:21:38.582 "dma_device_id": "system", 00:21:38.582 "dma_device_type": 1 00:21:38.582 }, 00:21:38.582 { 00:21:38.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:38.582 "dma_device_type": 2 00:21:38.582 }, 00:21:38.582 { 00:21:38.582 "dma_device_id": "system", 00:21:38.582 "dma_device_type": 1 00:21:38.582 }, 00:21:38.582 { 00:21:38.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:38.582 "dma_device_type": 2 00:21:38.582 }, 00:21:38.582 { 00:21:38.582 "dma_device_id": "system", 00:21:38.582 "dma_device_type": 1 00:21:38.582 }, 00:21:38.582 { 00:21:38.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:21:38.582 "dma_device_type": 2 00:21:38.582 }, 00:21:38.582 { 00:21:38.582 "dma_device_id": "system", 00:21:38.582 "dma_device_type": 1 00:21:38.582 }, 00:21:38.582 { 00:21:38.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:38.582 "dma_device_type": 2 00:21:38.582 } 00:21:38.582 ], 00:21:38.582 "driver_specific": { 00:21:38.582 "raid": { 00:21:38.582 "uuid": "33e00fe4-234f-4e64-bc0e-0c7a0f54e6f2", 00:21:38.582 "strip_size_kb": 64, 00:21:38.582 "state": "online", 00:21:38.582 "raid_level": "raid0", 00:21:38.582 "superblock": false, 00:21:38.582 "num_base_bdevs": 4, 00:21:38.582 "num_base_bdevs_discovered": 4, 00:21:38.582 "num_base_bdevs_operational": 4, 00:21:38.582 "base_bdevs_list": [ 00:21:38.582 { 00:21:38.582 "name": "BaseBdev1", 00:21:38.582 "uuid": "25d0eec8-f484-4100-ac54-e2089ae4ffa5", 00:21:38.582 "is_configured": true, 00:21:38.582 "data_offset": 0, 00:21:38.582 "data_size": 65536 00:21:38.582 }, 00:21:38.582 { 00:21:38.582 "name": "BaseBdev2", 00:21:38.582 "uuid": "394b769e-2dec-4e64-b704-4e9cdf6bb808", 00:21:38.582 "is_configured": true, 00:21:38.582 "data_offset": 0, 00:21:38.582 "data_size": 65536 00:21:38.582 }, 00:21:38.582 { 00:21:38.582 "name": "BaseBdev3", 00:21:38.582 "uuid": "342ad13d-711c-4629-b907-99489c72de1b", 00:21:38.582 "is_configured": true, 00:21:38.582 "data_offset": 0, 00:21:38.582 "data_size": 65536 00:21:38.582 }, 00:21:38.582 { 00:21:38.582 "name": "BaseBdev4", 00:21:38.582 "uuid": "10113ade-a26f-4b15-919b-b8c3b13d3ca3", 00:21:38.582 "is_configured": true, 00:21:38.582 "data_offset": 0, 00:21:38.582 "data_size": 65536 00:21:38.582 } 00:21:38.582 ] 00:21:38.582 } 00:21:38.582 } 00:21:38.582 }' 00:21:38.582 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:38.844 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:38.844 BaseBdev2 00:21:38.844 BaseBdev3 
00:21:38.844 BaseBdev4' 00:21:38.844 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:38.844 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:38.844 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:38.844 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:38.844 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.844 23:03:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.844 23:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:38.844 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.844 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:38.844 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:38.844 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:38.844 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:38.844 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.844 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.844 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:38.844 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.844 23:03:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:38.844 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:38.844 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:38.844 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:38.844 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.844 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.844 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:38.844 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.844 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:38.844 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:38.844 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:38.844 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:38.844 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:38.844 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.844 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.844 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.844 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:38.844 23:03:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:38.844 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:38.844 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.844 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.844 [2024-12-09 23:03:14.147179] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:38.844 [2024-12-09 23:03:14.147221] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:38.844 [2024-12-09 23:03:14.147284] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:39.104 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.104 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:39.105 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:21:39.105 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:39.105 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:21:39.105 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:21:39.105 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:21:39.105 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:39.105 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:21:39.105 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:39.105 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:21:39.105 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:39.105 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:39.105 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:39.105 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:39.105 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:39.105 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:39.105 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.105 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.105 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.105 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.105 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:39.105 "name": "Existed_Raid", 00:21:39.105 "uuid": "33e00fe4-234f-4e64-bc0e-0c7a0f54e6f2", 00:21:39.105 "strip_size_kb": 64, 00:21:39.105 "state": "offline", 00:21:39.105 "raid_level": "raid0", 00:21:39.105 "superblock": false, 00:21:39.105 "num_base_bdevs": 4, 00:21:39.105 "num_base_bdevs_discovered": 3, 00:21:39.105 "num_base_bdevs_operational": 3, 00:21:39.105 "base_bdevs_list": [ 00:21:39.105 { 00:21:39.105 "name": null, 00:21:39.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.105 "is_configured": false, 00:21:39.105 "data_offset": 0, 00:21:39.105 "data_size": 65536 00:21:39.105 }, 00:21:39.105 { 00:21:39.105 "name": "BaseBdev2", 00:21:39.105 "uuid": "394b769e-2dec-4e64-b704-4e9cdf6bb808", 00:21:39.105 "is_configured": 
true, 00:21:39.105 "data_offset": 0, 00:21:39.105 "data_size": 65536 00:21:39.105 }, 00:21:39.105 { 00:21:39.105 "name": "BaseBdev3", 00:21:39.105 "uuid": "342ad13d-711c-4629-b907-99489c72de1b", 00:21:39.105 "is_configured": true, 00:21:39.105 "data_offset": 0, 00:21:39.105 "data_size": 65536 00:21:39.105 }, 00:21:39.105 { 00:21:39.105 "name": "BaseBdev4", 00:21:39.105 "uuid": "10113ade-a26f-4b15-919b-b8c3b13d3ca3", 00:21:39.105 "is_configured": true, 00:21:39.105 "data_offset": 0, 00:21:39.105 "data_size": 65536 00:21:39.105 } 00:21:39.105 ] 00:21:39.105 }' 00:21:39.105 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:39.105 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.366 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:39.366 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:39.366 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.366 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:39.366 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.366 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.366 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.366 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:39.366 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:39.366 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:39.366 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:39.366 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.367 [2024-12-09 23:03:14.574609] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:39.367 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.367 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:39.367 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:39.367 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:39.367 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.367 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.367 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.367 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.367 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:39.367 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:39.367 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:21:39.367 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.367 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.367 [2024-12-09 23:03:14.680858] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:39.629 23:03:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.629 [2024-12-09 23:03:14.787456] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:39.629 [2024-12-09 23:03:14.787519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.629 BaseBdev2 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.629 [ 00:21:39.629 { 00:21:39.629 "name": "BaseBdev2", 00:21:39.629 "aliases": [ 00:21:39.629 "a8055682-efbe-4a8e-8ec9-8c660794c4fe" 00:21:39.629 ], 00:21:39.629 "product_name": "Malloc disk", 00:21:39.629 "block_size": 512, 00:21:39.629 "num_blocks": 65536, 00:21:39.629 "uuid": "a8055682-efbe-4a8e-8ec9-8c660794c4fe", 00:21:39.629 "assigned_rate_limits": { 00:21:39.629 "rw_ios_per_sec": 0, 00:21:39.629 "rw_mbytes_per_sec": 0, 00:21:39.629 "r_mbytes_per_sec": 0, 00:21:39.629 "w_mbytes_per_sec": 0 00:21:39.629 }, 00:21:39.629 "claimed": false, 00:21:39.629 "zoned": false, 00:21:39.629 "supported_io_types": { 00:21:39.629 "read": true, 00:21:39.629 "write": true, 00:21:39.629 "unmap": true, 00:21:39.629 "flush": true, 00:21:39.629 "reset": true, 00:21:39.629 "nvme_admin": false, 00:21:39.629 "nvme_io": false, 00:21:39.629 "nvme_io_md": false, 00:21:39.629 "write_zeroes": true, 00:21:39.629 "zcopy": true, 00:21:39.629 "get_zone_info": false, 00:21:39.629 "zone_management": false, 00:21:39.629 "zone_append": false, 00:21:39.629 "compare": false, 00:21:39.629 "compare_and_write": false, 00:21:39.629 "abort": true, 00:21:39.629 "seek_hole": false, 00:21:39.629 "seek_data": false, 
00:21:39.629 "copy": true, 00:21:39.629 "nvme_iov_md": false 00:21:39.629 }, 00:21:39.629 "memory_domains": [ 00:21:39.629 { 00:21:39.629 "dma_device_id": "system", 00:21:39.629 "dma_device_type": 1 00:21:39.629 }, 00:21:39.629 { 00:21:39.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.629 "dma_device_type": 2 00:21:39.629 } 00:21:39.629 ], 00:21:39.629 "driver_specific": {} 00:21:39.629 } 00:21:39.629 ] 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.629 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.892 BaseBdev3 00:21:39.892 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.892 23:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:21:39.892 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:39.892 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:39.892 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:39.892 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:39.892 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:39.892 
23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:39.892 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.892 23:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.892 [ 00:21:39.892 { 00:21:39.892 "name": "BaseBdev3", 00:21:39.892 "aliases": [ 00:21:39.892 "bb77524d-d385-49c1-b8ef-7b9b3e71e1da" 00:21:39.892 ], 00:21:39.892 "product_name": "Malloc disk", 00:21:39.892 "block_size": 512, 00:21:39.892 "num_blocks": 65536, 00:21:39.892 "uuid": "bb77524d-d385-49c1-b8ef-7b9b3e71e1da", 00:21:39.892 "assigned_rate_limits": { 00:21:39.892 "rw_ios_per_sec": 0, 00:21:39.892 "rw_mbytes_per_sec": 0, 00:21:39.892 "r_mbytes_per_sec": 0, 00:21:39.892 "w_mbytes_per_sec": 0 00:21:39.892 }, 00:21:39.892 "claimed": false, 00:21:39.892 "zoned": false, 00:21:39.892 "supported_io_types": { 00:21:39.892 "read": true, 00:21:39.892 "write": true, 00:21:39.892 "unmap": true, 00:21:39.892 "flush": true, 00:21:39.892 "reset": true, 00:21:39.892 "nvme_admin": false, 00:21:39.892 "nvme_io": false, 00:21:39.892 "nvme_io_md": false, 00:21:39.892 "write_zeroes": true, 00:21:39.892 "zcopy": true, 00:21:39.892 "get_zone_info": false, 00:21:39.892 "zone_management": false, 00:21:39.892 "zone_append": false, 00:21:39.892 "compare": false, 00:21:39.892 "compare_and_write": false, 00:21:39.892 "abort": true, 00:21:39.892 "seek_hole": false, 00:21:39.892 "seek_data": false, 00:21:39.892 
"copy": true, 00:21:39.892 "nvme_iov_md": false 00:21:39.892 }, 00:21:39.892 "memory_domains": [ 00:21:39.892 { 00:21:39.892 "dma_device_id": "system", 00:21:39.892 "dma_device_type": 1 00:21:39.892 }, 00:21:39.892 { 00:21:39.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.892 "dma_device_type": 2 00:21:39.892 } 00:21:39.892 ], 00:21:39.892 "driver_specific": {} 00:21:39.892 } 00:21:39.892 ] 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.892 BaseBdev4 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:39.892 23:03:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.892 [ 00:21:39.892 { 00:21:39.892 "name": "BaseBdev4", 00:21:39.892 "aliases": [ 00:21:39.892 "06991b49-5361-4628-972d-d57ef4bf1322" 00:21:39.892 ], 00:21:39.892 "product_name": "Malloc disk", 00:21:39.892 "block_size": 512, 00:21:39.892 "num_blocks": 65536, 00:21:39.892 "uuid": "06991b49-5361-4628-972d-d57ef4bf1322", 00:21:39.892 "assigned_rate_limits": { 00:21:39.892 "rw_ios_per_sec": 0, 00:21:39.892 "rw_mbytes_per_sec": 0, 00:21:39.892 "r_mbytes_per_sec": 0, 00:21:39.892 "w_mbytes_per_sec": 0 00:21:39.892 }, 00:21:39.892 "claimed": false, 00:21:39.892 "zoned": false, 00:21:39.892 "supported_io_types": { 00:21:39.892 "read": true, 00:21:39.892 "write": true, 00:21:39.892 "unmap": true, 00:21:39.892 "flush": true, 00:21:39.892 "reset": true, 00:21:39.892 "nvme_admin": false, 00:21:39.892 "nvme_io": false, 00:21:39.892 "nvme_io_md": false, 00:21:39.892 "write_zeroes": true, 00:21:39.892 "zcopy": true, 00:21:39.892 "get_zone_info": false, 00:21:39.892 "zone_management": false, 00:21:39.892 "zone_append": false, 00:21:39.892 "compare": false, 00:21:39.892 "compare_and_write": false, 00:21:39.892 "abort": true, 00:21:39.892 "seek_hole": false, 00:21:39.892 "seek_data": false, 00:21:39.892 "copy": true, 
00:21:39.892 "nvme_iov_md": false 00:21:39.892 }, 00:21:39.892 "memory_domains": [ 00:21:39.892 { 00:21:39.892 "dma_device_id": "system", 00:21:39.892 "dma_device_type": 1 00:21:39.892 }, 00:21:39.892 { 00:21:39.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.892 "dma_device_type": 2 00:21:39.892 } 00:21:39.892 ], 00:21:39.892 "driver_specific": {} 00:21:39.892 } 00:21:39.892 ] 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.892 [2024-12-09 23:03:15.084534] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:39.892 [2024-12-09 23:03:15.084770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:39.892 [2024-12-09 23:03:15.084813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:39.892 [2024-12-09 23:03:15.087015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:39.892 [2024-12-09 23:03:15.087083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.892 23:03:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:39.892 "name": "Existed_Raid", 00:21:39.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.892 "strip_size_kb": 64, 00:21:39.892 "state": "configuring", 00:21:39.892 
"raid_level": "raid0", 00:21:39.892 "superblock": false, 00:21:39.892 "num_base_bdevs": 4, 00:21:39.892 "num_base_bdevs_discovered": 3, 00:21:39.892 "num_base_bdevs_operational": 4, 00:21:39.892 "base_bdevs_list": [ 00:21:39.892 { 00:21:39.892 "name": "BaseBdev1", 00:21:39.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.892 "is_configured": false, 00:21:39.892 "data_offset": 0, 00:21:39.892 "data_size": 0 00:21:39.892 }, 00:21:39.892 { 00:21:39.892 "name": "BaseBdev2", 00:21:39.892 "uuid": "a8055682-efbe-4a8e-8ec9-8c660794c4fe", 00:21:39.892 "is_configured": true, 00:21:39.892 "data_offset": 0, 00:21:39.892 "data_size": 65536 00:21:39.892 }, 00:21:39.892 { 00:21:39.892 "name": "BaseBdev3", 00:21:39.892 "uuid": "bb77524d-d385-49c1-b8ef-7b9b3e71e1da", 00:21:39.892 "is_configured": true, 00:21:39.892 "data_offset": 0, 00:21:39.892 "data_size": 65536 00:21:39.892 }, 00:21:39.892 { 00:21:39.892 "name": "BaseBdev4", 00:21:39.892 "uuid": "06991b49-5361-4628-972d-d57ef4bf1322", 00:21:39.892 "is_configured": true, 00:21:39.892 "data_offset": 0, 00:21:39.892 "data_size": 65536 00:21:39.892 } 00:21:39.892 ] 00:21:39.892 }' 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:39.892 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.168 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:40.168 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.168 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.168 [2024-12-09 23:03:15.436620] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:40.168 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.168 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:40.168 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:40.168 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:40.168 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:40.168 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:40.168 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:40.168 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:40.168 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:40.168 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:40.168 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:40.168 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:40.168 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.168 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.168 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.168 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.168 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:40.168 "name": "Existed_Raid", 00:21:40.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.168 "strip_size_kb": 64, 00:21:40.168 "state": "configuring", 00:21:40.168 "raid_level": "raid0", 00:21:40.168 "superblock": false, 00:21:40.169 
"num_base_bdevs": 4, 00:21:40.169 "num_base_bdevs_discovered": 2, 00:21:40.169 "num_base_bdevs_operational": 4, 00:21:40.169 "base_bdevs_list": [ 00:21:40.169 { 00:21:40.169 "name": "BaseBdev1", 00:21:40.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.169 "is_configured": false, 00:21:40.169 "data_offset": 0, 00:21:40.169 "data_size": 0 00:21:40.169 }, 00:21:40.169 { 00:21:40.169 "name": null, 00:21:40.169 "uuid": "a8055682-efbe-4a8e-8ec9-8c660794c4fe", 00:21:40.169 "is_configured": false, 00:21:40.169 "data_offset": 0, 00:21:40.169 "data_size": 65536 00:21:40.169 }, 00:21:40.169 { 00:21:40.169 "name": "BaseBdev3", 00:21:40.169 "uuid": "bb77524d-d385-49c1-b8ef-7b9b3e71e1da", 00:21:40.169 "is_configured": true, 00:21:40.169 "data_offset": 0, 00:21:40.169 "data_size": 65536 00:21:40.169 }, 00:21:40.169 { 00:21:40.169 "name": "BaseBdev4", 00:21:40.169 "uuid": "06991b49-5361-4628-972d-d57ef4bf1322", 00:21:40.169 "is_configured": true, 00:21:40.169 "data_offset": 0, 00:21:40.169 "data_size": 65536 00:21:40.169 } 00:21:40.169 ] 00:21:40.169 }' 00:21:40.169 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:40.169 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.452 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.452 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.452 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:40.452 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.452 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:21:40.715 23:03:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.715 [2024-12-09 23:03:15.856828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:40.715 BaseBdev1 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:40.715 [ 00:21:40.715 { 00:21:40.715 "name": "BaseBdev1", 00:21:40.715 "aliases": [ 00:21:40.715 "6ccd3837-214b-41a3-947b-51d6556e96f1" 00:21:40.715 ], 00:21:40.715 "product_name": "Malloc disk", 00:21:40.715 "block_size": 512, 00:21:40.715 "num_blocks": 65536, 00:21:40.715 "uuid": "6ccd3837-214b-41a3-947b-51d6556e96f1", 00:21:40.715 "assigned_rate_limits": { 00:21:40.715 "rw_ios_per_sec": 0, 00:21:40.715 "rw_mbytes_per_sec": 0, 00:21:40.715 "r_mbytes_per_sec": 0, 00:21:40.715 "w_mbytes_per_sec": 0 00:21:40.715 }, 00:21:40.715 "claimed": true, 00:21:40.715 "claim_type": "exclusive_write", 00:21:40.715 "zoned": false, 00:21:40.715 "supported_io_types": { 00:21:40.715 "read": true, 00:21:40.715 "write": true, 00:21:40.715 "unmap": true, 00:21:40.715 "flush": true, 00:21:40.715 "reset": true, 00:21:40.715 "nvme_admin": false, 00:21:40.715 "nvme_io": false, 00:21:40.715 "nvme_io_md": false, 00:21:40.715 "write_zeroes": true, 00:21:40.715 "zcopy": true, 00:21:40.715 "get_zone_info": false, 00:21:40.715 "zone_management": false, 00:21:40.715 "zone_append": false, 00:21:40.715 "compare": false, 00:21:40.715 "compare_and_write": false, 00:21:40.715 "abort": true, 00:21:40.715 "seek_hole": false, 00:21:40.715 "seek_data": false, 00:21:40.715 "copy": true, 00:21:40.715 "nvme_iov_md": false 00:21:40.715 }, 00:21:40.715 "memory_domains": [ 00:21:40.715 { 00:21:40.715 "dma_device_id": "system", 00:21:40.715 "dma_device_type": 1 00:21:40.715 }, 00:21:40.715 { 00:21:40.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:40.715 "dma_device_type": 2 00:21:40.715 } 00:21:40.715 ], 00:21:40.715 "driver_specific": {} 00:21:40.715 } 00:21:40.715 ] 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:40.715 "name": "Existed_Raid", 00:21:40.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.715 "strip_size_kb": 64, 00:21:40.715 "state": "configuring", 00:21:40.715 "raid_level": "raid0", 00:21:40.715 "superblock": false, 
00:21:40.715 "num_base_bdevs": 4, 00:21:40.715 "num_base_bdevs_discovered": 3, 00:21:40.715 "num_base_bdevs_operational": 4, 00:21:40.715 "base_bdevs_list": [ 00:21:40.715 { 00:21:40.715 "name": "BaseBdev1", 00:21:40.715 "uuid": "6ccd3837-214b-41a3-947b-51d6556e96f1", 00:21:40.715 "is_configured": true, 00:21:40.715 "data_offset": 0, 00:21:40.715 "data_size": 65536 00:21:40.715 }, 00:21:40.715 { 00:21:40.715 "name": null, 00:21:40.715 "uuid": "a8055682-efbe-4a8e-8ec9-8c660794c4fe", 00:21:40.715 "is_configured": false, 00:21:40.715 "data_offset": 0, 00:21:40.715 "data_size": 65536 00:21:40.715 }, 00:21:40.715 { 00:21:40.715 "name": "BaseBdev3", 00:21:40.715 "uuid": "bb77524d-d385-49c1-b8ef-7b9b3e71e1da", 00:21:40.715 "is_configured": true, 00:21:40.715 "data_offset": 0, 00:21:40.715 "data_size": 65536 00:21:40.715 }, 00:21:40.715 { 00:21:40.715 "name": "BaseBdev4", 00:21:40.715 "uuid": "06991b49-5361-4628-972d-d57ef4bf1322", 00:21:40.715 "is_configured": true, 00:21:40.715 "data_offset": 0, 00:21:40.715 "data_size": 65536 00:21:40.715 } 00:21:40.715 ] 00:21:40.715 }' 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:40.715 23:03:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.977 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.977 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:40.977 23:03:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.977 23:03:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.977 23:03:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.977 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:21:40.977 23:03:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:21:40.977 23:03:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.977 23:03:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.977 [2024-12-09 23:03:16.233012] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:40.977 23:03:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.977 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:40.977 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:40.977 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:40.977 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:40.977 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:40.977 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:40.977 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:40.977 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:40.977 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:40.977 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:40.977 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.977 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:40.977 23:03:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.977 23:03:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.977 23:03:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.977 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:40.977 "name": "Existed_Raid", 00:21:40.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.977 "strip_size_kb": 64, 00:21:40.977 "state": "configuring", 00:21:40.977 "raid_level": "raid0", 00:21:40.977 "superblock": false, 00:21:40.977 "num_base_bdevs": 4, 00:21:40.977 "num_base_bdevs_discovered": 2, 00:21:40.977 "num_base_bdevs_operational": 4, 00:21:40.977 "base_bdevs_list": [ 00:21:40.977 { 00:21:40.977 "name": "BaseBdev1", 00:21:40.977 "uuid": "6ccd3837-214b-41a3-947b-51d6556e96f1", 00:21:40.977 "is_configured": true, 00:21:40.977 "data_offset": 0, 00:21:40.977 "data_size": 65536 00:21:40.977 }, 00:21:40.977 { 00:21:40.977 "name": null, 00:21:40.977 "uuid": "a8055682-efbe-4a8e-8ec9-8c660794c4fe", 00:21:40.977 "is_configured": false, 00:21:40.977 "data_offset": 0, 00:21:40.977 "data_size": 65536 00:21:40.977 }, 00:21:40.977 { 00:21:40.977 "name": null, 00:21:40.977 "uuid": "bb77524d-d385-49c1-b8ef-7b9b3e71e1da", 00:21:40.977 "is_configured": false, 00:21:40.977 "data_offset": 0, 00:21:40.977 "data_size": 65536 00:21:40.977 }, 00:21:40.977 { 00:21:40.977 "name": "BaseBdev4", 00:21:40.977 "uuid": "06991b49-5361-4628-972d-d57ef4bf1322", 00:21:40.977 "is_configured": true, 00:21:40.977 "data_offset": 0, 00:21:40.977 "data_size": 65536 00:21:40.977 } 00:21:40.977 ] 00:21:40.977 }' 00:21:40.977 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:40.978 23:03:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.238 23:03:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.238 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:41.238 23:03:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.238 23:03:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.238 23:03:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.238 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:21:41.238 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:41.238 23:03:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.238 23:03:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.500 [2024-12-09 23:03:16.601093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:41.500 23:03:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.500 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:41.500 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:41.500 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:41.500 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:41.500 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:41.500 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:41.500 23:03:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:41.500 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:41.500 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:41.500 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:41.500 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.500 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:41.500 23:03:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.500 23:03:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.500 23:03:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.500 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:41.500 "name": "Existed_Raid", 00:21:41.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.500 "strip_size_kb": 64, 00:21:41.500 "state": "configuring", 00:21:41.500 "raid_level": "raid0", 00:21:41.500 "superblock": false, 00:21:41.500 "num_base_bdevs": 4, 00:21:41.500 "num_base_bdevs_discovered": 3, 00:21:41.500 "num_base_bdevs_operational": 4, 00:21:41.500 "base_bdevs_list": [ 00:21:41.500 { 00:21:41.500 "name": "BaseBdev1", 00:21:41.500 "uuid": "6ccd3837-214b-41a3-947b-51d6556e96f1", 00:21:41.500 "is_configured": true, 00:21:41.500 "data_offset": 0, 00:21:41.500 "data_size": 65536 00:21:41.500 }, 00:21:41.500 { 00:21:41.500 "name": null, 00:21:41.500 "uuid": "a8055682-efbe-4a8e-8ec9-8c660794c4fe", 00:21:41.500 "is_configured": false, 00:21:41.500 "data_offset": 0, 00:21:41.500 "data_size": 65536 00:21:41.500 }, 00:21:41.500 { 00:21:41.500 "name": "BaseBdev3", 00:21:41.500 "uuid": "bb77524d-d385-49c1-b8ef-7b9b3e71e1da", 
00:21:41.500 "is_configured": true, 00:21:41.500 "data_offset": 0, 00:21:41.500 "data_size": 65536 00:21:41.500 }, 00:21:41.500 { 00:21:41.500 "name": "BaseBdev4", 00:21:41.500 "uuid": "06991b49-5361-4628-972d-d57ef4bf1322", 00:21:41.500 "is_configured": true, 00:21:41.500 "data_offset": 0, 00:21:41.500 "data_size": 65536 00:21:41.500 } 00:21:41.500 ] 00:21:41.500 }' 00:21:41.500 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:41.500 23:03:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.761 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.761 23:03:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.761 23:03:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.761 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:41.761 23:03:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.761 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:21:41.761 23:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:41.761 23:03:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.761 23:03:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.761 [2024-12-09 23:03:16.977233] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:41.761 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.761 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:41.761 23:03:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:41.761 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:41.761 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:41.761 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:41.761 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:41.761 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:41.761 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:41.761 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:41.761 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:41.761 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.761 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:41.761 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.761 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.761 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.761 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:41.761 "name": "Existed_Raid", 00:21:41.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.761 "strip_size_kb": 64, 00:21:41.761 "state": "configuring", 00:21:41.761 "raid_level": "raid0", 00:21:41.761 "superblock": false, 00:21:41.761 "num_base_bdevs": 4, 00:21:41.761 "num_base_bdevs_discovered": 2, 00:21:41.761 
"num_base_bdevs_operational": 4, 00:21:41.761 "base_bdevs_list": [ 00:21:41.761 { 00:21:41.761 "name": null, 00:21:41.761 "uuid": "6ccd3837-214b-41a3-947b-51d6556e96f1", 00:21:41.761 "is_configured": false, 00:21:41.761 "data_offset": 0, 00:21:41.761 "data_size": 65536 00:21:41.761 }, 00:21:41.761 { 00:21:41.761 "name": null, 00:21:41.761 "uuid": "a8055682-efbe-4a8e-8ec9-8c660794c4fe", 00:21:41.761 "is_configured": false, 00:21:41.761 "data_offset": 0, 00:21:41.761 "data_size": 65536 00:21:41.761 }, 00:21:41.761 { 00:21:41.761 "name": "BaseBdev3", 00:21:41.761 "uuid": "bb77524d-d385-49c1-b8ef-7b9b3e71e1da", 00:21:41.761 "is_configured": true, 00:21:41.761 "data_offset": 0, 00:21:41.761 "data_size": 65536 00:21:41.761 }, 00:21:41.761 { 00:21:41.761 "name": "BaseBdev4", 00:21:41.761 "uuid": "06991b49-5361-4628-972d-d57ef4bf1322", 00:21:41.761 "is_configured": true, 00:21:41.761 "data_offset": 0, 00:21:41.761 "data_size": 65536 00:21:41.761 } 00:21:41.761 ] 00:21:41.761 }' 00:21:41.761 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:41.761 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.022 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.022 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.022 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.022 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:42.284 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.284 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:21:42.284 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:21:42.284 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.284 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.284 [2024-12-09 23:03:17.413580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:42.284 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.284 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:42.284 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:42.284 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:42.284 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:42.284 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:42.284 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:42.284 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:42.284 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:42.284 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:42.284 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:42.284 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.284 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:42.284 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.284 
23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.284 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.284 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:42.284 "name": "Existed_Raid", 00:21:42.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.284 "strip_size_kb": 64, 00:21:42.284 "state": "configuring", 00:21:42.284 "raid_level": "raid0", 00:21:42.284 "superblock": false, 00:21:42.284 "num_base_bdevs": 4, 00:21:42.284 "num_base_bdevs_discovered": 3, 00:21:42.284 "num_base_bdevs_operational": 4, 00:21:42.284 "base_bdevs_list": [ 00:21:42.284 { 00:21:42.284 "name": null, 00:21:42.284 "uuid": "6ccd3837-214b-41a3-947b-51d6556e96f1", 00:21:42.284 "is_configured": false, 00:21:42.284 "data_offset": 0, 00:21:42.284 "data_size": 65536 00:21:42.284 }, 00:21:42.284 { 00:21:42.284 "name": "BaseBdev2", 00:21:42.284 "uuid": "a8055682-efbe-4a8e-8ec9-8c660794c4fe", 00:21:42.284 "is_configured": true, 00:21:42.284 "data_offset": 0, 00:21:42.284 "data_size": 65536 00:21:42.284 }, 00:21:42.284 { 00:21:42.284 "name": "BaseBdev3", 00:21:42.284 "uuid": "bb77524d-d385-49c1-b8ef-7b9b3e71e1da", 00:21:42.284 "is_configured": true, 00:21:42.284 "data_offset": 0, 00:21:42.284 "data_size": 65536 00:21:42.284 }, 00:21:42.284 { 00:21:42.284 "name": "BaseBdev4", 00:21:42.284 "uuid": "06991b49-5361-4628-972d-d57ef4bf1322", 00:21:42.284 "is_configured": true, 00:21:42.284 "data_offset": 0, 00:21:42.284 "data_size": 65536 00:21:42.284 } 00:21:42.284 ] 00:21:42.284 }' 00:21:42.284 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:42.284 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:42.547 23:03:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6ccd3837-214b-41a3-947b-51d6556e96f1 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.547 [2024-12-09 23:03:17.877400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:42.547 [2024-12-09 23:03:17.877461] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:42.547 [2024-12-09 23:03:17.877469] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:21:42.547 [2024-12-09 23:03:17.877769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:21:42.547 
[2024-12-09 23:03:17.877911] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:42.547 [2024-12-09 23:03:17.877922] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:21:42.547 [2024-12-09 23:03:17.878229] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:42.547 NewBaseBdev 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:21:42.547 [ 00:21:42.547 { 00:21:42.547 "name": "NewBaseBdev", 00:21:42.547 "aliases": [ 00:21:42.547 "6ccd3837-214b-41a3-947b-51d6556e96f1" 00:21:42.547 ], 00:21:42.547 "product_name": "Malloc disk", 00:21:42.547 "block_size": 512, 00:21:42.547 "num_blocks": 65536, 00:21:42.547 "uuid": "6ccd3837-214b-41a3-947b-51d6556e96f1", 00:21:42.547 "assigned_rate_limits": { 00:21:42.547 "rw_ios_per_sec": 0, 00:21:42.547 "rw_mbytes_per_sec": 0, 00:21:42.547 "r_mbytes_per_sec": 0, 00:21:42.547 "w_mbytes_per_sec": 0 00:21:42.547 }, 00:21:42.547 "claimed": true, 00:21:42.547 "claim_type": "exclusive_write", 00:21:42.547 "zoned": false, 00:21:42.547 "supported_io_types": { 00:21:42.547 "read": true, 00:21:42.547 "write": true, 00:21:42.547 "unmap": true, 00:21:42.547 "flush": true, 00:21:42.547 "reset": true, 00:21:42.547 "nvme_admin": false, 00:21:42.547 "nvme_io": false, 00:21:42.547 "nvme_io_md": false, 00:21:42.547 "write_zeroes": true, 00:21:42.547 "zcopy": true, 00:21:42.547 "get_zone_info": false, 00:21:42.547 "zone_management": false, 00:21:42.547 "zone_append": false, 00:21:42.547 "compare": false, 00:21:42.547 "compare_and_write": false, 00:21:42.547 "abort": true, 00:21:42.547 "seek_hole": false, 00:21:42.547 "seek_data": false, 00:21:42.547 "copy": true, 00:21:42.547 "nvme_iov_md": false 00:21:42.547 }, 00:21:42.547 "memory_domains": [ 00:21:42.547 { 00:21:42.547 "dma_device_id": "system", 00:21:42.547 "dma_device_type": 1 00:21:42.547 }, 00:21:42.547 { 00:21:42.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:42.547 "dma_device_type": 2 00:21:42.547 } 00:21:42.547 ], 00:21:42.547 "driver_specific": {} 00:21:42.547 } 00:21:42.547 ] 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid 
online raid0 64 4 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:42.547 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:42.852 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.852 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:42.852 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.852 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.852 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.852 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:42.852 "name": "Existed_Raid", 00:21:42.852 "uuid": "50c3d182-dfb5-4963-bcda-2e0f445ddbe4", 00:21:42.852 "strip_size_kb": 64, 00:21:42.852 "state": "online", 00:21:42.852 "raid_level": "raid0", 00:21:42.852 "superblock": false, 00:21:42.852 "num_base_bdevs": 4, 00:21:42.852 
"num_base_bdevs_discovered": 4, 00:21:42.852 "num_base_bdevs_operational": 4, 00:21:42.852 "base_bdevs_list": [ 00:21:42.852 { 00:21:42.852 "name": "NewBaseBdev", 00:21:42.852 "uuid": "6ccd3837-214b-41a3-947b-51d6556e96f1", 00:21:42.852 "is_configured": true, 00:21:42.852 "data_offset": 0, 00:21:42.852 "data_size": 65536 00:21:42.852 }, 00:21:42.852 { 00:21:42.852 "name": "BaseBdev2", 00:21:42.852 "uuid": "a8055682-efbe-4a8e-8ec9-8c660794c4fe", 00:21:42.852 "is_configured": true, 00:21:42.852 "data_offset": 0, 00:21:42.852 "data_size": 65536 00:21:42.852 }, 00:21:42.852 { 00:21:42.852 "name": "BaseBdev3", 00:21:42.852 "uuid": "bb77524d-d385-49c1-b8ef-7b9b3e71e1da", 00:21:42.852 "is_configured": true, 00:21:42.852 "data_offset": 0, 00:21:42.852 "data_size": 65536 00:21:42.852 }, 00:21:42.852 { 00:21:42.852 "name": "BaseBdev4", 00:21:42.852 "uuid": "06991b49-5361-4628-972d-d57ef4bf1322", 00:21:42.852 "is_configured": true, 00:21:42.852 "data_offset": 0, 00:21:42.852 "data_size": 65536 00:21:42.852 } 00:21:42.852 ] 00:21:42.852 }' 00:21:42.852 23:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:42.852 23:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.114 23:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:21:43.114 23:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:43.114 23:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:43.114 23:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:43.114 23:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:43.114 23:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:43.114 23:03:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:43.114 23:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:43.114 23:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.114 23:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.114 [2024-12-09 23:03:18.249977] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:43.114 23:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.114 23:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:43.114 "name": "Existed_Raid", 00:21:43.114 "aliases": [ 00:21:43.114 "50c3d182-dfb5-4963-bcda-2e0f445ddbe4" 00:21:43.114 ], 00:21:43.114 "product_name": "Raid Volume", 00:21:43.114 "block_size": 512, 00:21:43.114 "num_blocks": 262144, 00:21:43.114 "uuid": "50c3d182-dfb5-4963-bcda-2e0f445ddbe4", 00:21:43.114 "assigned_rate_limits": { 00:21:43.114 "rw_ios_per_sec": 0, 00:21:43.114 "rw_mbytes_per_sec": 0, 00:21:43.114 "r_mbytes_per_sec": 0, 00:21:43.114 "w_mbytes_per_sec": 0 00:21:43.114 }, 00:21:43.114 "claimed": false, 00:21:43.114 "zoned": false, 00:21:43.114 "supported_io_types": { 00:21:43.114 "read": true, 00:21:43.114 "write": true, 00:21:43.114 "unmap": true, 00:21:43.114 "flush": true, 00:21:43.114 "reset": true, 00:21:43.114 "nvme_admin": false, 00:21:43.114 "nvme_io": false, 00:21:43.114 "nvme_io_md": false, 00:21:43.114 "write_zeroes": true, 00:21:43.114 "zcopy": false, 00:21:43.114 "get_zone_info": false, 00:21:43.114 "zone_management": false, 00:21:43.114 "zone_append": false, 00:21:43.114 "compare": false, 00:21:43.114 "compare_and_write": false, 00:21:43.114 "abort": false, 00:21:43.114 "seek_hole": false, 00:21:43.114 "seek_data": false, 00:21:43.114 "copy": false, 00:21:43.114 "nvme_iov_md": false 00:21:43.114 }, 00:21:43.114 "memory_domains": [ 
00:21:43.114 { 00:21:43.114 "dma_device_id": "system", 00:21:43.114 "dma_device_type": 1 00:21:43.114 }, 00:21:43.114 { 00:21:43.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.114 "dma_device_type": 2 00:21:43.114 }, 00:21:43.114 { 00:21:43.114 "dma_device_id": "system", 00:21:43.114 "dma_device_type": 1 00:21:43.114 }, 00:21:43.114 { 00:21:43.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.114 "dma_device_type": 2 00:21:43.114 }, 00:21:43.115 { 00:21:43.115 "dma_device_id": "system", 00:21:43.115 "dma_device_type": 1 00:21:43.115 }, 00:21:43.115 { 00:21:43.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.115 "dma_device_type": 2 00:21:43.115 }, 00:21:43.115 { 00:21:43.115 "dma_device_id": "system", 00:21:43.115 "dma_device_type": 1 00:21:43.115 }, 00:21:43.115 { 00:21:43.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.115 "dma_device_type": 2 00:21:43.115 } 00:21:43.115 ], 00:21:43.115 "driver_specific": { 00:21:43.115 "raid": { 00:21:43.115 "uuid": "50c3d182-dfb5-4963-bcda-2e0f445ddbe4", 00:21:43.115 "strip_size_kb": 64, 00:21:43.115 "state": "online", 00:21:43.115 "raid_level": "raid0", 00:21:43.115 "superblock": false, 00:21:43.115 "num_base_bdevs": 4, 00:21:43.115 "num_base_bdevs_discovered": 4, 00:21:43.115 "num_base_bdevs_operational": 4, 00:21:43.115 "base_bdevs_list": [ 00:21:43.115 { 00:21:43.115 "name": "NewBaseBdev", 00:21:43.115 "uuid": "6ccd3837-214b-41a3-947b-51d6556e96f1", 00:21:43.115 "is_configured": true, 00:21:43.115 "data_offset": 0, 00:21:43.115 "data_size": 65536 00:21:43.115 }, 00:21:43.115 { 00:21:43.115 "name": "BaseBdev2", 00:21:43.115 "uuid": "a8055682-efbe-4a8e-8ec9-8c660794c4fe", 00:21:43.115 "is_configured": true, 00:21:43.115 "data_offset": 0, 00:21:43.115 "data_size": 65536 00:21:43.115 }, 00:21:43.115 { 00:21:43.115 "name": "BaseBdev3", 00:21:43.115 "uuid": "bb77524d-d385-49c1-b8ef-7b9b3e71e1da", 00:21:43.115 "is_configured": true, 00:21:43.115 "data_offset": 0, 00:21:43.115 "data_size": 65536 
00:21:43.115 }, 00:21:43.115 { 00:21:43.115 "name": "BaseBdev4", 00:21:43.115 "uuid": "06991b49-5361-4628-972d-d57ef4bf1322", 00:21:43.115 "is_configured": true, 00:21:43.115 "data_offset": 0, 00:21:43.115 "data_size": 65536 00:21:43.115 } 00:21:43.115 ] 00:21:43.115 } 00:21:43.115 } 00:21:43.115 }' 00:21:43.115 23:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:43.115 23:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:21:43.115 BaseBdev2 00:21:43.115 BaseBdev3 00:21:43.115 BaseBdev4' 00:21:43.115 23:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:43.115 23:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:43.115 23:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:43.115 23:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:21:43.115 23:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:43.115 23:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.115 23:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.115 23:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.115 23:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:43.115 23:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:43.115 23:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:43.115 
23:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:43.115 23:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:43.115 23:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.115 23:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.115 23:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.115 23:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:43.115 23:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:43.115 23:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:43.115 23:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:43.115 23:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:43.115 23:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.115 23:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.115 23:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.115 23:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:43.115 23:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:43.115 23:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:43.115 23:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:21:43.115 23:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.115 23:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.115 23:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:43.115 23:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.376 23:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:43.376 23:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:43.376 23:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:43.376 23:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.376 23:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.376 [2024-12-09 23:03:18.489610] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:43.376 [2024-12-09 23:03:18.489647] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:43.376 [2024-12-09 23:03:18.489737] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:43.376 [2024-12-09 23:03:18.489816] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:43.376 [2024-12-09 23:03:18.489828] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:21:43.376 23:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.376 23:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67655 00:21:43.376 23:03:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 67655 ']' 00:21:43.376 23:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67655 00:21:43.376 23:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:21:43.376 23:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:43.376 23:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67655 00:21:43.376 killing process with pid 67655 00:21:43.376 23:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:43.376 23:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:43.376 23:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67655' 00:21:43.376 23:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67655 00:21:43.376 [2024-12-09 23:03:18.524195] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:43.376 23:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67655 00:21:43.637 [2024-12-09 23:03:18.808405] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:44.584 ************************************ 00:21:44.584 END TEST raid_state_function_test 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:21:44.584 00:21:44.584 real 0m9.089s 00:21:44.584 user 0m14.150s 00:21:44.584 sys 0m1.665s 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.584 ************************************ 00:21:44.584 23:03:19 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:21:44.584 23:03:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:44.584 23:03:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:44.584 23:03:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:44.584 ************************************ 00:21:44.584 START TEST raid_state_function_test_sb 00:21:44.584 ************************************ 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:21:44.584 
23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:21:44.584 Process raid pid: 68304 00:21:44.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68304 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68304' 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68304 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68304 ']' 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:44.584 23:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.584 [2024-12-09 23:03:19.799986] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:21:44.584 [2024-12-09 23:03:19.800420] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.844 [2024-12-09 23:03:19.967656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.844 [2024-12-09 23:03:20.178882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.106 [2024-12-09 23:03:20.350533] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:45.106 [2024-12-09 23:03:20.350871] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:45.368 23:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:45.368 23:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:21:45.368 23:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:45.368 23:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.368 23:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:45.368 [2024-12-09 23:03:20.689777] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:45.368 [2024-12-09 23:03:20.689860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:45.368 [2024-12-09 23:03:20.689872] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:45.368 [2024-12-09 23:03:20.689883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:45.368 [2024-12-09 23:03:20.689889] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:21:45.368 [2024-12-09 23:03:20.689899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:45.368 [2024-12-09 23:03:20.689906] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:45.368 [2024-12-09 23:03:20.689915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:45.368 23:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.368 23:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:45.368 23:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:45.368 23:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:45.368 23:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:45.368 23:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:45.368 23:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:45.368 23:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:45.368 23:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:45.368 23:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:45.368 23:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:45.368 23:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:45.368 23:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:45.368 23:03:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.368 23:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:45.368 23:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.700 23:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:45.700 "name": "Existed_Raid", 00:21:45.700 "uuid": "3ae195a3-4587-484f-8479-53f8a3e27860", 00:21:45.700 "strip_size_kb": 64, 00:21:45.700 "state": "configuring", 00:21:45.700 "raid_level": "raid0", 00:21:45.700 "superblock": true, 00:21:45.700 "num_base_bdevs": 4, 00:21:45.700 "num_base_bdevs_discovered": 0, 00:21:45.700 "num_base_bdevs_operational": 4, 00:21:45.700 "base_bdevs_list": [ 00:21:45.700 { 00:21:45.700 "name": "BaseBdev1", 00:21:45.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.700 "is_configured": false, 00:21:45.700 "data_offset": 0, 00:21:45.700 "data_size": 0 00:21:45.700 }, 00:21:45.700 { 00:21:45.700 "name": "BaseBdev2", 00:21:45.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.700 "is_configured": false, 00:21:45.700 "data_offset": 0, 00:21:45.700 "data_size": 0 00:21:45.700 }, 00:21:45.700 { 00:21:45.700 "name": "BaseBdev3", 00:21:45.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.700 "is_configured": false, 00:21:45.700 "data_offset": 0, 00:21:45.700 "data_size": 0 00:21:45.700 }, 00:21:45.700 { 00:21:45.700 "name": "BaseBdev4", 00:21:45.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.700 "is_configured": false, 00:21:45.700 "data_offset": 0, 00:21:45.700 "data_size": 0 00:21:45.700 } 00:21:45.700 ] 00:21:45.700 }' 00:21:45.700 23:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:45.700 23:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:45.700 23:03:21 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:45.700 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.700 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:45.700 [2024-12-09 23:03:21.017772] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:45.700 [2024-12-09 23:03:21.017827] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:45.700 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.700 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:45.700 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.700 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:45.700 [2024-12-09 23:03:21.029800] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:45.700 [2024-12-09 23:03:21.029863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:45.700 [2024-12-09 23:03:21.029872] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:45.700 [2024-12-09 23:03:21.029882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:45.700 [2024-12-09 23:03:21.029889] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:45.700 [2024-12-09 23:03:21.029898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:45.700 [2024-12-09 23:03:21.029905] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:21:45.700 [2024-12-09 23:03:21.029914] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:45.700 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.700 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:45.700 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.700 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:45.962 [2024-12-09 23:03:21.068056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:45.962 BaseBdev1 00:21:45.962 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.962 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:45.962 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:45.962 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:45.962 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:45.962 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:45.962 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:45.962 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:45.962 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.962 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:45.962 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:21:45.962 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:45.962 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.962 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:45.962 [ 00:21:45.962 { 00:21:45.962 "name": "BaseBdev1", 00:21:45.962 "aliases": [ 00:21:45.962 "e35f9a8d-9556-474e-ad1d-51123dd5691c" 00:21:45.962 ], 00:21:45.962 "product_name": "Malloc disk", 00:21:45.962 "block_size": 512, 00:21:45.962 "num_blocks": 65536, 00:21:45.962 "uuid": "e35f9a8d-9556-474e-ad1d-51123dd5691c", 00:21:45.962 "assigned_rate_limits": { 00:21:45.962 "rw_ios_per_sec": 0, 00:21:45.962 "rw_mbytes_per_sec": 0, 00:21:45.962 "r_mbytes_per_sec": 0, 00:21:45.962 "w_mbytes_per_sec": 0 00:21:45.962 }, 00:21:45.962 "claimed": true, 00:21:45.962 "claim_type": "exclusive_write", 00:21:45.962 "zoned": false, 00:21:45.962 "supported_io_types": { 00:21:45.962 "read": true, 00:21:45.962 "write": true, 00:21:45.962 "unmap": true, 00:21:45.962 "flush": true, 00:21:45.962 "reset": true, 00:21:45.962 "nvme_admin": false, 00:21:45.962 "nvme_io": false, 00:21:45.962 "nvme_io_md": false, 00:21:45.962 "write_zeroes": true, 00:21:45.962 "zcopy": true, 00:21:45.962 "get_zone_info": false, 00:21:45.962 "zone_management": false, 00:21:45.962 "zone_append": false, 00:21:45.962 "compare": false, 00:21:45.962 "compare_and_write": false, 00:21:45.962 "abort": true, 00:21:45.962 "seek_hole": false, 00:21:45.962 "seek_data": false, 00:21:45.962 "copy": true, 00:21:45.962 "nvme_iov_md": false 00:21:45.962 }, 00:21:45.962 "memory_domains": [ 00:21:45.962 { 00:21:45.962 "dma_device_id": "system", 00:21:45.962 "dma_device_type": 1 00:21:45.962 }, 00:21:45.962 { 00:21:45.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:45.962 "dma_device_type": 2 00:21:45.962 } 00:21:45.962 ], 00:21:45.962 "driver_specific": {} 
00:21:45.962 } 00:21:45.962 ] 00:21:45.962 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.962 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:45.962 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:45.962 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:45.962 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:45.962 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:45.962 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:45.962 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:45.962 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:45.962 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:45.962 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:45.962 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:45.962 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:45.962 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:45.962 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.962 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:45.962 23:03:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.962 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:45.962 "name": "Existed_Raid", 00:21:45.962 "uuid": "97d46045-2f68-4ae1-b9c8-ef9ac83edd50", 00:21:45.962 "strip_size_kb": 64, 00:21:45.962 "state": "configuring", 00:21:45.962 "raid_level": "raid0", 00:21:45.962 "superblock": true, 00:21:45.962 "num_base_bdevs": 4, 00:21:45.962 "num_base_bdevs_discovered": 1, 00:21:45.962 "num_base_bdevs_operational": 4, 00:21:45.962 "base_bdevs_list": [ 00:21:45.962 { 00:21:45.962 "name": "BaseBdev1", 00:21:45.962 "uuid": "e35f9a8d-9556-474e-ad1d-51123dd5691c", 00:21:45.962 "is_configured": true, 00:21:45.962 "data_offset": 2048, 00:21:45.962 "data_size": 63488 00:21:45.962 }, 00:21:45.962 { 00:21:45.962 "name": "BaseBdev2", 00:21:45.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.962 "is_configured": false, 00:21:45.962 "data_offset": 0, 00:21:45.962 "data_size": 0 00:21:45.962 }, 00:21:45.962 { 00:21:45.962 "name": "BaseBdev3", 00:21:45.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.962 "is_configured": false, 00:21:45.962 "data_offset": 0, 00:21:45.962 "data_size": 0 00:21:45.962 }, 00:21:45.962 { 00:21:45.962 "name": "BaseBdev4", 00:21:45.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.962 "is_configured": false, 00:21:45.962 "data_offset": 0, 00:21:45.962 "data_size": 0 00:21:45.962 } 00:21:45.962 ] 00:21:45.962 }' 00:21:45.962 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:45.962 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.223 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:46.223 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.223 23:03:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:21:46.223 [2024-12-09 23:03:21.428221] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:46.223 [2024-12-09 23:03:21.428288] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:46.223 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.223 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:46.223 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.223 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.223 [2024-12-09 23:03:21.436293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:46.223 [2024-12-09 23:03:21.438526] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:46.223 [2024-12-09 23:03:21.438589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:46.223 [2024-12-09 23:03:21.438601] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:46.223 [2024-12-09 23:03:21.438615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:46.223 [2024-12-09 23:03:21.438622] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:46.223 [2024-12-09 23:03:21.438632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:46.223 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.223 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:46.223 23:03:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:46.223 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:46.223 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:46.223 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:46.223 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:46.224 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:46.224 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:46.224 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:46.224 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:46.224 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:46.224 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:46.224 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:46.224 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.224 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.224 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.224 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.224 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:46.224 "name": 
"Existed_Raid", 00:21:46.224 "uuid": "d08d3b56-b215-4420-93b3-ebfb018283a2", 00:21:46.224 "strip_size_kb": 64, 00:21:46.224 "state": "configuring", 00:21:46.224 "raid_level": "raid0", 00:21:46.224 "superblock": true, 00:21:46.224 "num_base_bdevs": 4, 00:21:46.224 "num_base_bdevs_discovered": 1, 00:21:46.224 "num_base_bdevs_operational": 4, 00:21:46.224 "base_bdevs_list": [ 00:21:46.224 { 00:21:46.224 "name": "BaseBdev1", 00:21:46.224 "uuid": "e35f9a8d-9556-474e-ad1d-51123dd5691c", 00:21:46.224 "is_configured": true, 00:21:46.224 "data_offset": 2048, 00:21:46.224 "data_size": 63488 00:21:46.224 }, 00:21:46.224 { 00:21:46.224 "name": "BaseBdev2", 00:21:46.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.224 "is_configured": false, 00:21:46.224 "data_offset": 0, 00:21:46.224 "data_size": 0 00:21:46.224 }, 00:21:46.224 { 00:21:46.224 "name": "BaseBdev3", 00:21:46.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.224 "is_configured": false, 00:21:46.224 "data_offset": 0, 00:21:46.224 "data_size": 0 00:21:46.224 }, 00:21:46.224 { 00:21:46.224 "name": "BaseBdev4", 00:21:46.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.224 "is_configured": false, 00:21:46.224 "data_offset": 0, 00:21:46.224 "data_size": 0 00:21:46.224 } 00:21:46.224 ] 00:21:46.224 }' 00:21:46.224 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:46.224 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.485 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:46.485 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.485 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.485 [2024-12-09 23:03:21.788455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:21:46.485 BaseBdev2 00:21:46.485 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.485 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:46.485 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:46.486 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:46.486 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:46.486 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:46.486 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:46.486 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:46.486 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.486 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.486 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.486 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:46.486 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.486 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.486 [ 00:21:46.486 { 00:21:46.486 "name": "BaseBdev2", 00:21:46.486 "aliases": [ 00:21:46.486 "c5f45d95-2898-4c03-884d-63ffe8b6df17" 00:21:46.486 ], 00:21:46.486 "product_name": "Malloc disk", 00:21:46.486 "block_size": 512, 00:21:46.486 "num_blocks": 65536, 00:21:46.486 "uuid": "c5f45d95-2898-4c03-884d-63ffe8b6df17", 00:21:46.486 
"assigned_rate_limits": { 00:21:46.486 "rw_ios_per_sec": 0, 00:21:46.486 "rw_mbytes_per_sec": 0, 00:21:46.486 "r_mbytes_per_sec": 0, 00:21:46.486 "w_mbytes_per_sec": 0 00:21:46.486 }, 00:21:46.486 "claimed": true, 00:21:46.486 "claim_type": "exclusive_write", 00:21:46.486 "zoned": false, 00:21:46.486 "supported_io_types": { 00:21:46.486 "read": true, 00:21:46.486 "write": true, 00:21:46.486 "unmap": true, 00:21:46.486 "flush": true, 00:21:46.486 "reset": true, 00:21:46.486 "nvme_admin": false, 00:21:46.486 "nvme_io": false, 00:21:46.486 "nvme_io_md": false, 00:21:46.486 "write_zeroes": true, 00:21:46.486 "zcopy": true, 00:21:46.486 "get_zone_info": false, 00:21:46.486 "zone_management": false, 00:21:46.486 "zone_append": false, 00:21:46.486 "compare": false, 00:21:46.486 "compare_and_write": false, 00:21:46.486 "abort": true, 00:21:46.486 "seek_hole": false, 00:21:46.486 "seek_data": false, 00:21:46.486 "copy": true, 00:21:46.486 "nvme_iov_md": false 00:21:46.486 }, 00:21:46.486 "memory_domains": [ 00:21:46.486 { 00:21:46.486 "dma_device_id": "system", 00:21:46.486 "dma_device_type": 1 00:21:46.486 }, 00:21:46.486 { 00:21:46.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:46.486 "dma_device_type": 2 00:21:46.486 } 00:21:46.486 ], 00:21:46.486 "driver_specific": {} 00:21:46.486 } 00:21:46.486 ] 00:21:46.486 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.486 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:46.486 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:46.486 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:46.486 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:46.486 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:21:46.486 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:46.486 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:46.486 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:46.486 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:46.486 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:46.486 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:46.486 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:46.486 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:46.486 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.486 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:46.486 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.486 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.486 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.745 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:46.745 "name": "Existed_Raid", 00:21:46.745 "uuid": "d08d3b56-b215-4420-93b3-ebfb018283a2", 00:21:46.745 "strip_size_kb": 64, 00:21:46.745 "state": "configuring", 00:21:46.745 "raid_level": "raid0", 00:21:46.745 "superblock": true, 00:21:46.745 "num_base_bdevs": 4, 00:21:46.745 "num_base_bdevs_discovered": 2, 00:21:46.745 "num_base_bdevs_operational": 4, 
00:21:46.745 "base_bdevs_list": [ 00:21:46.745 { 00:21:46.745 "name": "BaseBdev1", 00:21:46.745 "uuid": "e35f9a8d-9556-474e-ad1d-51123dd5691c", 00:21:46.745 "is_configured": true, 00:21:46.745 "data_offset": 2048, 00:21:46.745 "data_size": 63488 00:21:46.745 }, 00:21:46.745 { 00:21:46.745 "name": "BaseBdev2", 00:21:46.745 "uuid": "c5f45d95-2898-4c03-884d-63ffe8b6df17", 00:21:46.745 "is_configured": true, 00:21:46.745 "data_offset": 2048, 00:21:46.745 "data_size": 63488 00:21:46.745 }, 00:21:46.745 { 00:21:46.745 "name": "BaseBdev3", 00:21:46.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.745 "is_configured": false, 00:21:46.745 "data_offset": 0, 00:21:46.745 "data_size": 0 00:21:46.745 }, 00:21:46.745 { 00:21:46.745 "name": "BaseBdev4", 00:21:46.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.745 "is_configured": false, 00:21:46.745 "data_offset": 0, 00:21:46.745 "data_size": 0 00:21:46.745 } 00:21:46.745 ] 00:21:46.745 }' 00:21:46.745 23:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:46.745 23:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:47.006 [2024-12-09 23:03:22.185336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:47.006 BaseBdev3 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:47.006 [ 00:21:47.006 { 00:21:47.006 "name": "BaseBdev3", 00:21:47.006 "aliases": [ 00:21:47.006 "8e932267-f2bb-4b43-a78e-2dc4752c4408" 00:21:47.006 ], 00:21:47.006 "product_name": "Malloc disk", 00:21:47.006 "block_size": 512, 00:21:47.006 "num_blocks": 65536, 00:21:47.006 "uuid": "8e932267-f2bb-4b43-a78e-2dc4752c4408", 00:21:47.006 "assigned_rate_limits": { 00:21:47.006 "rw_ios_per_sec": 0, 00:21:47.006 "rw_mbytes_per_sec": 0, 00:21:47.006 "r_mbytes_per_sec": 0, 00:21:47.006 "w_mbytes_per_sec": 0 00:21:47.006 }, 00:21:47.006 "claimed": true, 00:21:47.006 "claim_type": "exclusive_write", 00:21:47.006 "zoned": false, 00:21:47.006 "supported_io_types": { 00:21:47.006 "read": true, 00:21:47.006 
"write": true, 00:21:47.006 "unmap": true, 00:21:47.006 "flush": true, 00:21:47.006 "reset": true, 00:21:47.006 "nvme_admin": false, 00:21:47.006 "nvme_io": false, 00:21:47.006 "nvme_io_md": false, 00:21:47.006 "write_zeroes": true, 00:21:47.006 "zcopy": true, 00:21:47.006 "get_zone_info": false, 00:21:47.006 "zone_management": false, 00:21:47.006 "zone_append": false, 00:21:47.006 "compare": false, 00:21:47.006 "compare_and_write": false, 00:21:47.006 "abort": true, 00:21:47.006 "seek_hole": false, 00:21:47.006 "seek_data": false, 00:21:47.006 "copy": true, 00:21:47.006 "nvme_iov_md": false 00:21:47.006 }, 00:21:47.006 "memory_domains": [ 00:21:47.006 { 00:21:47.006 "dma_device_id": "system", 00:21:47.006 "dma_device_type": 1 00:21:47.006 }, 00:21:47.006 { 00:21:47.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:47.006 "dma_device_type": 2 00:21:47.006 } 00:21:47.006 ], 00:21:47.006 "driver_specific": {} 00:21:47.006 } 00:21:47.006 ] 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.006 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:47.006 "name": "Existed_Raid", 00:21:47.006 "uuid": "d08d3b56-b215-4420-93b3-ebfb018283a2", 00:21:47.006 "strip_size_kb": 64, 00:21:47.006 "state": "configuring", 00:21:47.006 "raid_level": "raid0", 00:21:47.006 "superblock": true, 00:21:47.006 "num_base_bdevs": 4, 00:21:47.006 "num_base_bdevs_discovered": 3, 00:21:47.006 "num_base_bdevs_operational": 4, 00:21:47.006 "base_bdevs_list": [ 00:21:47.006 { 00:21:47.006 "name": "BaseBdev1", 00:21:47.006 "uuid": "e35f9a8d-9556-474e-ad1d-51123dd5691c", 00:21:47.006 "is_configured": true, 00:21:47.006 "data_offset": 2048, 00:21:47.006 "data_size": 63488 00:21:47.007 }, 00:21:47.007 { 00:21:47.007 "name": "BaseBdev2", 00:21:47.007 "uuid": 
"c5f45d95-2898-4c03-884d-63ffe8b6df17", 00:21:47.007 "is_configured": true, 00:21:47.007 "data_offset": 2048, 00:21:47.007 "data_size": 63488 00:21:47.007 }, 00:21:47.007 { 00:21:47.007 "name": "BaseBdev3", 00:21:47.007 "uuid": "8e932267-f2bb-4b43-a78e-2dc4752c4408", 00:21:47.007 "is_configured": true, 00:21:47.007 "data_offset": 2048, 00:21:47.007 "data_size": 63488 00:21:47.007 }, 00:21:47.007 { 00:21:47.007 "name": "BaseBdev4", 00:21:47.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.007 "is_configured": false, 00:21:47.007 "data_offset": 0, 00:21:47.007 "data_size": 0 00:21:47.007 } 00:21:47.007 ] 00:21:47.007 }' 00:21:47.007 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:47.007 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:47.267 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:47.267 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.267 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:47.267 [2024-12-09 23:03:22.626187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:47.267 [2024-12-09 23:03:22.626803] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:47.267 [2024-12-09 23:03:22.626831] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:47.267 [2024-12-09 23:03:22.627189] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:47.267 [2024-12-09 23:03:22.627350] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:47.267 BaseBdev4 00:21:47.267 [2024-12-09 23:03:22.627363] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:21:47.267 [2024-12-09 23:03:22.627519] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:47.528 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.528 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:21:47.528 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:21:47.528 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:47.528 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:47.528 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:47.528 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:47.528 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:47.528 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.528 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:47.528 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.528 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:47.528 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.528 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:47.528 [ 00:21:47.528 { 00:21:47.528 "name": "BaseBdev4", 00:21:47.528 "aliases": [ 00:21:47.528 "1a48e4d3-eeba-4a18-9265-e11a43e9c1c0" 00:21:47.528 ], 00:21:47.528 "product_name": "Malloc disk", 00:21:47.528 "block_size": 512, 00:21:47.528 
"num_blocks": 65536, 00:21:47.528 "uuid": "1a48e4d3-eeba-4a18-9265-e11a43e9c1c0", 00:21:47.528 "assigned_rate_limits": { 00:21:47.528 "rw_ios_per_sec": 0, 00:21:47.528 "rw_mbytes_per_sec": 0, 00:21:47.528 "r_mbytes_per_sec": 0, 00:21:47.528 "w_mbytes_per_sec": 0 00:21:47.528 }, 00:21:47.528 "claimed": true, 00:21:47.528 "claim_type": "exclusive_write", 00:21:47.528 "zoned": false, 00:21:47.528 "supported_io_types": { 00:21:47.528 "read": true, 00:21:47.528 "write": true, 00:21:47.528 "unmap": true, 00:21:47.528 "flush": true, 00:21:47.528 "reset": true, 00:21:47.528 "nvme_admin": false, 00:21:47.528 "nvme_io": false, 00:21:47.528 "nvme_io_md": false, 00:21:47.528 "write_zeroes": true, 00:21:47.528 "zcopy": true, 00:21:47.528 "get_zone_info": false, 00:21:47.528 "zone_management": false, 00:21:47.528 "zone_append": false, 00:21:47.528 "compare": false, 00:21:47.528 "compare_and_write": false, 00:21:47.528 "abort": true, 00:21:47.528 "seek_hole": false, 00:21:47.528 "seek_data": false, 00:21:47.528 "copy": true, 00:21:47.528 "nvme_iov_md": false 00:21:47.528 }, 00:21:47.528 "memory_domains": [ 00:21:47.528 { 00:21:47.528 "dma_device_id": "system", 00:21:47.528 "dma_device_type": 1 00:21:47.528 }, 00:21:47.528 { 00:21:47.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:47.528 "dma_device_type": 2 00:21:47.528 } 00:21:47.528 ], 00:21:47.528 "driver_specific": {} 00:21:47.528 } 00:21:47.528 ] 00:21:47.528 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.528 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:47.528 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:47.528 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:47.528 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:21:47.528 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:47.528 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:47.528 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:47.528 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:47.528 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:47.528 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:47.528 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:47.528 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:47.528 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:47.528 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:47.528 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:47.528 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.528 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:47.528 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.528 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:47.528 "name": "Existed_Raid", 00:21:47.528 "uuid": "d08d3b56-b215-4420-93b3-ebfb018283a2", 00:21:47.528 "strip_size_kb": 64, 00:21:47.528 "state": "online", 00:21:47.528 "raid_level": "raid0", 00:21:47.528 "superblock": true, 00:21:47.528 "num_base_bdevs": 4, 
00:21:47.528 "num_base_bdevs_discovered": 4, 00:21:47.528 "num_base_bdevs_operational": 4, 00:21:47.528 "base_bdevs_list": [ 00:21:47.528 { 00:21:47.528 "name": "BaseBdev1", 00:21:47.528 "uuid": "e35f9a8d-9556-474e-ad1d-51123dd5691c", 00:21:47.528 "is_configured": true, 00:21:47.528 "data_offset": 2048, 00:21:47.528 "data_size": 63488 00:21:47.528 }, 00:21:47.528 { 00:21:47.528 "name": "BaseBdev2", 00:21:47.528 "uuid": "c5f45d95-2898-4c03-884d-63ffe8b6df17", 00:21:47.528 "is_configured": true, 00:21:47.528 "data_offset": 2048, 00:21:47.529 "data_size": 63488 00:21:47.529 }, 00:21:47.529 { 00:21:47.529 "name": "BaseBdev3", 00:21:47.529 "uuid": "8e932267-f2bb-4b43-a78e-2dc4752c4408", 00:21:47.529 "is_configured": true, 00:21:47.529 "data_offset": 2048, 00:21:47.529 "data_size": 63488 00:21:47.529 }, 00:21:47.529 { 00:21:47.529 "name": "BaseBdev4", 00:21:47.529 "uuid": "1a48e4d3-eeba-4a18-9265-e11a43e9c1c0", 00:21:47.529 "is_configured": true, 00:21:47.529 "data_offset": 2048, 00:21:47.529 "data_size": 63488 00:21:47.529 } 00:21:47.529 ] 00:21:47.529 }' 00:21:47.529 23:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:47.529 23:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:47.790 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:47.790 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:47.790 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:47.790 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:47.790 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:21:47.790 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:47.790 
23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:47.790 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.790 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:47.790 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:47.790 [2024-12-09 23:03:23.014745] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:47.790 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.790 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:47.790 "name": "Existed_Raid", 00:21:47.790 "aliases": [ 00:21:47.790 "d08d3b56-b215-4420-93b3-ebfb018283a2" 00:21:47.790 ], 00:21:47.790 "product_name": "Raid Volume", 00:21:47.790 "block_size": 512, 00:21:47.790 "num_blocks": 253952, 00:21:47.790 "uuid": "d08d3b56-b215-4420-93b3-ebfb018283a2", 00:21:47.790 "assigned_rate_limits": { 00:21:47.790 "rw_ios_per_sec": 0, 00:21:47.790 "rw_mbytes_per_sec": 0, 00:21:47.790 "r_mbytes_per_sec": 0, 00:21:47.790 "w_mbytes_per_sec": 0 00:21:47.790 }, 00:21:47.790 "claimed": false, 00:21:47.790 "zoned": false, 00:21:47.790 "supported_io_types": { 00:21:47.790 "read": true, 00:21:47.790 "write": true, 00:21:47.790 "unmap": true, 00:21:47.790 "flush": true, 00:21:47.790 "reset": true, 00:21:47.790 "nvme_admin": false, 00:21:47.790 "nvme_io": false, 00:21:47.790 "nvme_io_md": false, 00:21:47.790 "write_zeroes": true, 00:21:47.790 "zcopy": false, 00:21:47.790 "get_zone_info": false, 00:21:47.790 "zone_management": false, 00:21:47.790 "zone_append": false, 00:21:47.790 "compare": false, 00:21:47.790 "compare_and_write": false, 00:21:47.790 "abort": false, 00:21:47.790 "seek_hole": false, 00:21:47.790 "seek_data": false, 00:21:47.790 "copy": false, 00:21:47.790 
"nvme_iov_md": false 00:21:47.790 }, 00:21:47.790 "memory_domains": [ 00:21:47.790 { 00:21:47.790 "dma_device_id": "system", 00:21:47.790 "dma_device_type": 1 00:21:47.790 }, 00:21:47.790 { 00:21:47.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:47.790 "dma_device_type": 2 00:21:47.790 }, 00:21:47.790 { 00:21:47.790 "dma_device_id": "system", 00:21:47.790 "dma_device_type": 1 00:21:47.790 }, 00:21:47.790 { 00:21:47.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:47.790 "dma_device_type": 2 00:21:47.790 }, 00:21:47.790 { 00:21:47.790 "dma_device_id": "system", 00:21:47.790 "dma_device_type": 1 00:21:47.790 }, 00:21:47.790 { 00:21:47.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:47.790 "dma_device_type": 2 00:21:47.790 }, 00:21:47.790 { 00:21:47.790 "dma_device_id": "system", 00:21:47.790 "dma_device_type": 1 00:21:47.790 }, 00:21:47.790 { 00:21:47.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:47.790 "dma_device_type": 2 00:21:47.790 } 00:21:47.790 ], 00:21:47.790 "driver_specific": { 00:21:47.790 "raid": { 00:21:47.790 "uuid": "d08d3b56-b215-4420-93b3-ebfb018283a2", 00:21:47.790 "strip_size_kb": 64, 00:21:47.790 "state": "online", 00:21:47.790 "raid_level": "raid0", 00:21:47.790 "superblock": true, 00:21:47.790 "num_base_bdevs": 4, 00:21:47.790 "num_base_bdevs_discovered": 4, 00:21:47.790 "num_base_bdevs_operational": 4, 00:21:47.790 "base_bdevs_list": [ 00:21:47.790 { 00:21:47.790 "name": "BaseBdev1", 00:21:47.790 "uuid": "e35f9a8d-9556-474e-ad1d-51123dd5691c", 00:21:47.790 "is_configured": true, 00:21:47.790 "data_offset": 2048, 00:21:47.790 "data_size": 63488 00:21:47.790 }, 00:21:47.790 { 00:21:47.790 "name": "BaseBdev2", 00:21:47.790 "uuid": "c5f45d95-2898-4c03-884d-63ffe8b6df17", 00:21:47.790 "is_configured": true, 00:21:47.790 "data_offset": 2048, 00:21:47.790 "data_size": 63488 00:21:47.790 }, 00:21:47.790 { 00:21:47.790 "name": "BaseBdev3", 00:21:47.790 "uuid": "8e932267-f2bb-4b43-a78e-2dc4752c4408", 00:21:47.790 "is_configured": true, 
00:21:47.790 "data_offset": 2048, 00:21:47.790 "data_size": 63488 00:21:47.790 }, 00:21:47.790 { 00:21:47.790 "name": "BaseBdev4", 00:21:47.790 "uuid": "1a48e4d3-eeba-4a18-9265-e11a43e9c1c0", 00:21:47.790 "is_configured": true, 00:21:47.790 "data_offset": 2048, 00:21:47.790 "data_size": 63488 00:21:47.790 } 00:21:47.790 ] 00:21:47.790 } 00:21:47.790 } 00:21:47.790 }' 00:21:47.790 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:47.790 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:47.790 BaseBdev2 00:21:47.790 BaseBdev3 00:21:47.790 BaseBdev4' 00:21:47.790 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:47.790 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:47.790 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:47.790 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:47.790 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:47.790 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.790 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:47.790 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.790 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:48.052 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:48.052 23:03:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:48.052 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:48.052 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:48.052 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.052 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.052 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.052 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:48.052 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:48.052 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:48.052 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:48.052 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:48.052 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.052 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.052 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.052 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:48.052 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:48.052 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:21:48.052 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:48.052 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:48.052 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.052 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.052 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.052 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:48.052 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:48.052 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:48.052 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.052 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.052 [2024-12-09 23:03:23.258511] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:48.052 [2024-12-09 23:03:23.258557] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:48.052 [2024-12-09 23:03:23.258621] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:48.052 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.052 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:48.052 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:21:48.052 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:21:48.052 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:21:48.053 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:21:48.053 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:21:48.053 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:48.053 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:21:48.053 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:48.053 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:48.053 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:48.053 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:48.053 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:48.053 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:48.053 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:48.053 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:48.053 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.053 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.053 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.053 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:48.053 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:48.053 "name": "Existed_Raid", 00:21:48.053 "uuid": "d08d3b56-b215-4420-93b3-ebfb018283a2", 00:21:48.053 "strip_size_kb": 64, 00:21:48.053 "state": "offline", 00:21:48.053 "raid_level": "raid0", 00:21:48.053 "superblock": true, 00:21:48.053 "num_base_bdevs": 4, 00:21:48.053 "num_base_bdevs_discovered": 3, 00:21:48.053 "num_base_bdevs_operational": 3, 00:21:48.053 "base_bdevs_list": [ 00:21:48.053 { 00:21:48.053 "name": null, 00:21:48.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.053 "is_configured": false, 00:21:48.053 "data_offset": 0, 00:21:48.053 "data_size": 63488 00:21:48.053 }, 00:21:48.053 { 00:21:48.053 "name": "BaseBdev2", 00:21:48.053 "uuid": "c5f45d95-2898-4c03-884d-63ffe8b6df17", 00:21:48.053 "is_configured": true, 00:21:48.053 "data_offset": 2048, 00:21:48.053 "data_size": 63488 00:21:48.053 }, 00:21:48.053 { 00:21:48.053 "name": "BaseBdev3", 00:21:48.053 "uuid": "8e932267-f2bb-4b43-a78e-2dc4752c4408", 00:21:48.053 "is_configured": true, 00:21:48.053 "data_offset": 2048, 00:21:48.053 "data_size": 63488 00:21:48.053 }, 00:21:48.053 { 00:21:48.053 "name": "BaseBdev4", 00:21:48.053 "uuid": "1a48e4d3-eeba-4a18-9265-e11a43e9c1c0", 00:21:48.053 "is_configured": true, 00:21:48.053 "data_offset": 2048, 00:21:48.053 "data_size": 63488 00:21:48.053 } 00:21:48.053 ] 00:21:48.053 }' 00:21:48.053 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:48.053 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.623 
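Deleting BaseBdev1 drives the array from `online` to `offline` because `has_redundancy raid0` returns 1 (the trace hits the `return 1` arm of the case statement at `bdev_raid.sh@200`), so `expected_state` becomes `offline` and `num_base_bdevs_operational` drops to 3. A sketch of that decision; the exact list of redundant levels is an assumption, not copied from `bdev_raid.sh`:

```shell
#!/usr/bin/env bash
# Sketch of the has_redundancy helper exercised above: raid0 stripes with no
# parity or mirror copies, so losing any base bdev takes the array offline.
# The set of redundant levels below is assumed for illustration.
has_redundancy() {
	case $1 in
	raid1 | raid5f)
		return 0
		;;
	*)
		return 1
		;;
	esac
}

raid_level=raid0
if has_redundancy "$raid_level"; then
	expected_state=online
else
	expected_state=offline
fi
echo "$expected_state"   # raid0 -> offline, matching the state verified above
```

A redundant level would instead keep the array online with one fewer operational base bdev, which is why the same removal loop later in these tests branches on the raid level.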
23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.623 [2024-12-09 23:03:23.719031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.623 [2024-12-09 23:03:23.829717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:21:48.623 23:03:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.623 23:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.623 [2024-12-09 23:03:23.948494] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:48.623 [2024-12-09 23:03:23.948611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.882 BaseBdev2 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.882 [ 00:21:48.882 { 00:21:48.882 "name": "BaseBdev2", 00:21:48.882 "aliases": [ 00:21:48.882 
"35a78a94-128a-42a3-8b49-7b6eba21ed1e" 00:21:48.882 ], 00:21:48.882 "product_name": "Malloc disk", 00:21:48.882 "block_size": 512, 00:21:48.882 "num_blocks": 65536, 00:21:48.882 "uuid": "35a78a94-128a-42a3-8b49-7b6eba21ed1e", 00:21:48.882 "assigned_rate_limits": { 00:21:48.882 "rw_ios_per_sec": 0, 00:21:48.882 "rw_mbytes_per_sec": 0, 00:21:48.882 "r_mbytes_per_sec": 0, 00:21:48.882 "w_mbytes_per_sec": 0 00:21:48.882 }, 00:21:48.882 "claimed": false, 00:21:48.882 "zoned": false, 00:21:48.882 "supported_io_types": { 00:21:48.882 "read": true, 00:21:48.882 "write": true, 00:21:48.882 "unmap": true, 00:21:48.882 "flush": true, 00:21:48.882 "reset": true, 00:21:48.882 "nvme_admin": false, 00:21:48.882 "nvme_io": false, 00:21:48.882 "nvme_io_md": false, 00:21:48.882 "write_zeroes": true, 00:21:48.882 "zcopy": true, 00:21:48.882 "get_zone_info": false, 00:21:48.882 "zone_management": false, 00:21:48.882 "zone_append": false, 00:21:48.882 "compare": false, 00:21:48.882 "compare_and_write": false, 00:21:48.882 "abort": true, 00:21:48.882 "seek_hole": false, 00:21:48.882 "seek_data": false, 00:21:48.882 "copy": true, 00:21:48.882 "nvme_iov_md": false 00:21:48.882 }, 00:21:48.882 "memory_domains": [ 00:21:48.882 { 00:21:48.882 "dma_device_id": "system", 00:21:48.882 "dma_device_type": 1 00:21:48.882 }, 00:21:48.882 { 00:21:48.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:48.882 "dma_device_type": 2 00:21:48.882 } 00:21:48.882 ], 00:21:48.882 "driver_specific": {} 00:21:48.882 } 00:21:48.882 ] 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:48.882 23:03:24 
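The `waitforbdev BaseBdev2` step traced above (the `@903`–`@911` lines of `autotest_common.sh`) follows a create-then-poll pattern: after `bdev_malloc_create`, the helper runs `bdev_wait_for_examine` and then queries `bdev_get_bdevs -b <name> -t 2000` until the bdev appears. The following is a minimal self-contained sketch of that pattern, not the real helper: `rpc_cmd` is stubbed here for illustration (in the suite it forwards to SPDK's `scripts/rpc.py`), and the loop granularity is an assumption.

```shell
#!/usr/bin/env bash
# Sketch of the waitforbdev pattern from the trace above. NOT the real helper:
# rpc_cmd is stubbed so the sketch runs standalone; the real one talks to a
# running SPDK target via scripts/rpc.py.
rpc_cmd() {
    case "$1" in
        bdev_wait_for_examine) return 0 ;;                  # stub: examine already done
        bdev_get_bdevs)        [[ "$*" == *BaseBdev2* ]] ;; # stub: only BaseBdev2 "exists"
        *)                     return 0 ;;
    esac
}

waitforbdev() {
    local bdev_name=$1
    local bdev_timeout=${2:-2000}   # milliseconds, matching the log's "-t 2000"
    local elapsed
    rpc_cmd bdev_wait_for_examine
    # Poll until the bdev is visible or the timeout budget is spent.
    for ((elapsed = 0; elapsed < bdev_timeout; elapsed += 100)); do
        if rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

waitforbdev BaseBdev2 && echo "BaseBdev2 ready"
```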
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.882 BaseBdev3 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.882 [ 00:21:48.882 { 
00:21:48.882 "name": "BaseBdev3", 00:21:48.882 "aliases": [ 00:21:48.882 "bc49ba81-2142-4fd3-b5e5-e75b0dcf7a4b" 00:21:48.882 ], 00:21:48.882 "product_name": "Malloc disk", 00:21:48.882 "block_size": 512, 00:21:48.882 "num_blocks": 65536, 00:21:48.882 "uuid": "bc49ba81-2142-4fd3-b5e5-e75b0dcf7a4b", 00:21:48.882 "assigned_rate_limits": { 00:21:48.882 "rw_ios_per_sec": 0, 00:21:48.882 "rw_mbytes_per_sec": 0, 00:21:48.882 "r_mbytes_per_sec": 0, 00:21:48.882 "w_mbytes_per_sec": 0 00:21:48.882 }, 00:21:48.882 "claimed": false, 00:21:48.882 "zoned": false, 00:21:48.882 "supported_io_types": { 00:21:48.882 "read": true, 00:21:48.882 "write": true, 00:21:48.882 "unmap": true, 00:21:48.882 "flush": true, 00:21:48.882 "reset": true, 00:21:48.882 "nvme_admin": false, 00:21:48.882 "nvme_io": false, 00:21:48.882 "nvme_io_md": false, 00:21:48.882 "write_zeroes": true, 00:21:48.882 "zcopy": true, 00:21:48.882 "get_zone_info": false, 00:21:48.882 "zone_management": false, 00:21:48.882 "zone_append": false, 00:21:48.882 "compare": false, 00:21:48.882 "compare_and_write": false, 00:21:48.882 "abort": true, 00:21:48.882 "seek_hole": false, 00:21:48.882 "seek_data": false, 00:21:48.882 "copy": true, 00:21:48.882 "nvme_iov_md": false 00:21:48.882 }, 00:21:48.882 "memory_domains": [ 00:21:48.882 { 00:21:48.882 "dma_device_id": "system", 00:21:48.882 "dma_device_type": 1 00:21:48.882 }, 00:21:48.882 { 00:21:48.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:48.882 "dma_device_type": 2 00:21:48.882 } 00:21:48.882 ], 00:21:48.882 "driver_specific": {} 00:21:48.882 } 00:21:48.882 ] 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.882 BaseBdev4 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.882 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:21:49.142 [ 00:21:49.142 { 00:21:49.142 "name": "BaseBdev4", 00:21:49.142 "aliases": [ 00:21:49.142 "b0ff1915-87a9-4cb8-a800-041232dd2c5b" 00:21:49.142 ], 00:21:49.142 "product_name": "Malloc disk", 00:21:49.142 "block_size": 512, 00:21:49.142 "num_blocks": 65536, 00:21:49.143 "uuid": "b0ff1915-87a9-4cb8-a800-041232dd2c5b", 00:21:49.143 "assigned_rate_limits": { 00:21:49.143 "rw_ios_per_sec": 0, 00:21:49.143 "rw_mbytes_per_sec": 0, 00:21:49.143 "r_mbytes_per_sec": 0, 00:21:49.143 "w_mbytes_per_sec": 0 00:21:49.143 }, 00:21:49.143 "claimed": false, 00:21:49.143 "zoned": false, 00:21:49.143 "supported_io_types": { 00:21:49.143 "read": true, 00:21:49.143 "write": true, 00:21:49.143 "unmap": true, 00:21:49.143 "flush": true, 00:21:49.143 "reset": true, 00:21:49.143 "nvme_admin": false, 00:21:49.143 "nvme_io": false, 00:21:49.143 "nvme_io_md": false, 00:21:49.143 "write_zeroes": true, 00:21:49.143 "zcopy": true, 00:21:49.143 "get_zone_info": false, 00:21:49.143 "zone_management": false, 00:21:49.143 "zone_append": false, 00:21:49.143 "compare": false, 00:21:49.143 "compare_and_write": false, 00:21:49.143 "abort": true, 00:21:49.143 "seek_hole": false, 00:21:49.143 "seek_data": false, 00:21:49.143 "copy": true, 00:21:49.143 "nvme_iov_md": false 00:21:49.143 }, 00:21:49.143 "memory_domains": [ 00:21:49.143 { 00:21:49.143 "dma_device_id": "system", 00:21:49.143 "dma_device_type": 1 00:21:49.143 }, 00:21:49.143 { 00:21:49.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.143 "dma_device_type": 2 00:21:49.143 } 00:21:49.143 ], 00:21:49.143 "driver_specific": {} 00:21:49.143 } 00:21:49.143 ] 00:21:49.143 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.143 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:49.143 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:49.143 23:03:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:49.143 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:49.143 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.143 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:49.143 [2024-12-09 23:03:24.253011] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:49.143 [2024-12-09 23:03:24.253266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:49.143 [2024-12-09 23:03:24.253364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:49.143 [2024-12-09 23:03:24.255722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:49.143 [2024-12-09 23:03:24.255938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:49.143 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.143 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:49.143 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:49.143 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:49.143 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:49.143 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:49.143 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:21:49.143 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:49.143 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:49.143 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:49.143 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:49.143 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:49.143 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:49.143 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.143 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:49.143 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.143 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:49.143 "name": "Existed_Raid", 00:21:49.143 "uuid": "90f4569f-9ab6-4f91-b068-9d294691d504", 00:21:49.143 "strip_size_kb": 64, 00:21:49.143 "state": "configuring", 00:21:49.143 "raid_level": "raid0", 00:21:49.143 "superblock": true, 00:21:49.143 "num_base_bdevs": 4, 00:21:49.143 "num_base_bdevs_discovered": 3, 00:21:49.143 "num_base_bdevs_operational": 4, 00:21:49.143 "base_bdevs_list": [ 00:21:49.143 { 00:21:49.143 "name": "BaseBdev1", 00:21:49.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.143 "is_configured": false, 00:21:49.143 "data_offset": 0, 00:21:49.143 "data_size": 0 00:21:49.143 }, 00:21:49.143 { 00:21:49.143 "name": "BaseBdev2", 00:21:49.143 "uuid": "35a78a94-128a-42a3-8b49-7b6eba21ed1e", 00:21:49.143 "is_configured": true, 00:21:49.143 "data_offset": 2048, 00:21:49.143 "data_size": 63488 
00:21:49.143 }, 00:21:49.143 { 00:21:49.143 "name": "BaseBdev3", 00:21:49.143 "uuid": "bc49ba81-2142-4fd3-b5e5-e75b0dcf7a4b", 00:21:49.143 "is_configured": true, 00:21:49.143 "data_offset": 2048, 00:21:49.143 "data_size": 63488 00:21:49.143 }, 00:21:49.143 { 00:21:49.143 "name": "BaseBdev4", 00:21:49.143 "uuid": "b0ff1915-87a9-4cb8-a800-041232dd2c5b", 00:21:49.143 "is_configured": true, 00:21:49.143 "data_offset": 2048, 00:21:49.143 "data_size": 63488 00:21:49.143 } 00:21:49.143 ] 00:21:49.143 }' 00:21:49.143 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:49.143 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:49.404 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:49.404 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.404 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:49.404 [2024-12-09 23:03:24.581085] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:49.405 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.405 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:49.405 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:49.405 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:49.405 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:49.405 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:49.405 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:21:49.405 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:49.405 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:49.405 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:49.405 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:49.405 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:49.405 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.405 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:49.405 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:49.405 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.405 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:49.405 "name": "Existed_Raid", 00:21:49.405 "uuid": "90f4569f-9ab6-4f91-b068-9d294691d504", 00:21:49.405 "strip_size_kb": 64, 00:21:49.405 "state": "configuring", 00:21:49.405 "raid_level": "raid0", 00:21:49.405 "superblock": true, 00:21:49.405 "num_base_bdevs": 4, 00:21:49.405 "num_base_bdevs_discovered": 2, 00:21:49.405 "num_base_bdevs_operational": 4, 00:21:49.405 "base_bdevs_list": [ 00:21:49.405 { 00:21:49.405 "name": "BaseBdev1", 00:21:49.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.405 "is_configured": false, 00:21:49.405 "data_offset": 0, 00:21:49.405 "data_size": 0 00:21:49.405 }, 00:21:49.405 { 00:21:49.405 "name": null, 00:21:49.405 "uuid": "35a78a94-128a-42a3-8b49-7b6eba21ed1e", 00:21:49.405 "is_configured": false, 00:21:49.405 "data_offset": 0, 00:21:49.405 "data_size": 63488 
00:21:49.405 }, 00:21:49.405 { 00:21:49.405 "name": "BaseBdev3", 00:21:49.405 "uuid": "bc49ba81-2142-4fd3-b5e5-e75b0dcf7a4b", 00:21:49.405 "is_configured": true, 00:21:49.405 "data_offset": 2048, 00:21:49.405 "data_size": 63488 00:21:49.405 }, 00:21:49.405 { 00:21:49.405 "name": "BaseBdev4", 00:21:49.405 "uuid": "b0ff1915-87a9-4cb8-a800-041232dd2c5b", 00:21:49.405 "is_configured": true, 00:21:49.405 "data_offset": 2048, 00:21:49.405 "data_size": 63488 00:21:49.405 } 00:21:49.405 ] 00:21:49.405 }' 00:21:49.405 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:49.405 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:49.666 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:49.666 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:49.666 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.666 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:49.666 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.666 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:21:49.666 23:03:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:49.666 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.666 23:03:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:49.666 [2024-12-09 23:03:25.010677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:49.666 BaseBdev1 00:21:49.667 23:03:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.667 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:21:49.667 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:49.667 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:49.667 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:49.667 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:49.667 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:49.667 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:49.667 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.667 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:49.667 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.667 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:49.667 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.667 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:49.982 [ 00:21:49.982 { 00:21:49.982 "name": "BaseBdev1", 00:21:49.982 "aliases": [ 00:21:49.982 "47494339-7842-4cf3-bc11-96e65b60763d" 00:21:49.982 ], 00:21:49.982 "product_name": "Malloc disk", 00:21:49.982 "block_size": 512, 00:21:49.982 "num_blocks": 65536, 00:21:49.982 "uuid": "47494339-7842-4cf3-bc11-96e65b60763d", 00:21:49.982 "assigned_rate_limits": { 00:21:49.982 "rw_ios_per_sec": 0, 00:21:49.982 "rw_mbytes_per_sec": 0, 
00:21:49.982 "r_mbytes_per_sec": 0, 00:21:49.982 "w_mbytes_per_sec": 0 00:21:49.982 }, 00:21:49.982 "claimed": true, 00:21:49.982 "claim_type": "exclusive_write", 00:21:49.982 "zoned": false, 00:21:49.982 "supported_io_types": { 00:21:49.982 "read": true, 00:21:49.982 "write": true, 00:21:49.982 "unmap": true, 00:21:49.982 "flush": true, 00:21:49.982 "reset": true, 00:21:49.982 "nvme_admin": false, 00:21:49.982 "nvme_io": false, 00:21:49.982 "nvme_io_md": false, 00:21:49.982 "write_zeroes": true, 00:21:49.982 "zcopy": true, 00:21:49.982 "get_zone_info": false, 00:21:49.982 "zone_management": false, 00:21:49.982 "zone_append": false, 00:21:49.982 "compare": false, 00:21:49.982 "compare_and_write": false, 00:21:49.982 "abort": true, 00:21:49.982 "seek_hole": false, 00:21:49.982 "seek_data": false, 00:21:49.982 "copy": true, 00:21:49.982 "nvme_iov_md": false 00:21:49.982 }, 00:21:49.982 "memory_domains": [ 00:21:49.982 { 00:21:49.982 "dma_device_id": "system", 00:21:49.982 "dma_device_type": 1 00:21:49.982 }, 00:21:49.982 { 00:21:49.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.982 "dma_device_type": 2 00:21:49.982 } 00:21:49.982 ], 00:21:49.982 "driver_specific": {} 00:21:49.982 } 00:21:49.982 ] 00:21:49.982 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.982 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:49.982 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:49.982 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:49.982 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:49.982 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:49.982 23:03:25 
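Each base bdev in this test, including the re-created BaseBdev1 above, comes from `bdev_malloc_create 32 512`: a 32 MiB malloc disk with a 512-byte block size. A quick back-of-the-envelope check (not part of the test itself) shows why `bdev_get_bdevs` consistently reports `"block_size": 512, "num_blocks": 65536`:

```shell
# bdev_malloc_create 32 512  ->  32 MiB total, 512-byte blocks.
size_mb=32
block_size=512
num_blocks=$(( size_mb * 1024 * 1024 / block_size ))
echo "num_blocks=$num_blocks"   # num_blocks=65536, matching the JSON above
```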
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:49.982 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:49.982 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:49.982 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:49.982 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:49.982 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:49.982 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:49.982 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.982 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:49.982 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:49.982 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.982 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:49.982 "name": "Existed_Raid", 00:21:49.982 "uuid": "90f4569f-9ab6-4f91-b068-9d294691d504", 00:21:49.982 "strip_size_kb": 64, 00:21:49.982 "state": "configuring", 00:21:49.982 "raid_level": "raid0", 00:21:49.982 "superblock": true, 00:21:49.982 "num_base_bdevs": 4, 00:21:49.982 "num_base_bdevs_discovered": 3, 00:21:49.982 "num_base_bdevs_operational": 4, 00:21:49.982 "base_bdevs_list": [ 00:21:49.982 { 00:21:49.982 "name": "BaseBdev1", 00:21:49.982 "uuid": "47494339-7842-4cf3-bc11-96e65b60763d", 00:21:49.982 "is_configured": true, 00:21:49.982 "data_offset": 2048, 00:21:49.982 "data_size": 63488 00:21:49.982 }, 00:21:49.982 { 
00:21:49.982 "name": null, 00:21:49.982 "uuid": "35a78a94-128a-42a3-8b49-7b6eba21ed1e", 00:21:49.982 "is_configured": false, 00:21:49.982 "data_offset": 0, 00:21:49.982 "data_size": 63488 00:21:49.982 }, 00:21:49.982 { 00:21:49.982 "name": "BaseBdev3", 00:21:49.982 "uuid": "bc49ba81-2142-4fd3-b5e5-e75b0dcf7a4b", 00:21:49.982 "is_configured": true, 00:21:49.982 "data_offset": 2048, 00:21:49.982 "data_size": 63488 00:21:49.982 }, 00:21:49.982 { 00:21:49.982 "name": "BaseBdev4", 00:21:49.982 "uuid": "b0ff1915-87a9-4cb8-a800-041232dd2c5b", 00:21:49.982 "is_configured": true, 00:21:49.982 "data_offset": 2048, 00:21:49.982 "data_size": 63488 00:21:49.982 } 00:21:49.982 ] 00:21:49.982 }' 00:21:49.982 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:49.982 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:50.244 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.244 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.244 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:50.244 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:50.244 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.244 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:21:50.244 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:21:50.244 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.245 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:50.245 [2024-12-09 23:03:25.390895] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:50.245 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.245 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:50.245 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:50.245 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:50.245 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:50.245 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:50.245 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:50.245 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:50.245 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:50.245 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:50.245 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:50.245 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.245 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:50.245 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.245 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:50.245 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.245 23:03:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:50.245 "name": "Existed_Raid", 00:21:50.245 "uuid": "90f4569f-9ab6-4f91-b068-9d294691d504", 00:21:50.245 "strip_size_kb": 64, 00:21:50.245 "state": "configuring", 00:21:50.245 "raid_level": "raid0", 00:21:50.245 "superblock": true, 00:21:50.245 "num_base_bdevs": 4, 00:21:50.245 "num_base_bdevs_discovered": 2, 00:21:50.245 "num_base_bdevs_operational": 4, 00:21:50.245 "base_bdevs_list": [ 00:21:50.245 { 00:21:50.245 "name": "BaseBdev1", 00:21:50.245 "uuid": "47494339-7842-4cf3-bc11-96e65b60763d", 00:21:50.245 "is_configured": true, 00:21:50.245 "data_offset": 2048, 00:21:50.245 "data_size": 63488 00:21:50.245 }, 00:21:50.245 { 00:21:50.245 "name": null, 00:21:50.245 "uuid": "35a78a94-128a-42a3-8b49-7b6eba21ed1e", 00:21:50.245 "is_configured": false, 00:21:50.245 "data_offset": 0, 00:21:50.245 "data_size": 63488 00:21:50.245 }, 00:21:50.245 { 00:21:50.245 "name": null, 00:21:50.245 "uuid": "bc49ba81-2142-4fd3-b5e5-e75b0dcf7a4b", 00:21:50.245 "is_configured": false, 00:21:50.245 "data_offset": 0, 00:21:50.245 "data_size": 63488 00:21:50.245 }, 00:21:50.245 { 00:21:50.245 "name": "BaseBdev4", 00:21:50.245 "uuid": "b0ff1915-87a9-4cb8-a800-041232dd2c5b", 00:21:50.245 "is_configured": true, 00:21:50.245 "data_offset": 2048, 00:21:50.245 "data_size": 63488 00:21:50.245 } 00:21:50.245 ] 00:21:50.245 }' 00:21:50.245 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:50.245 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:50.507 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.507 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.507 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:50.507 23:03:25 
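The `verify_raid_bdev_state Existed_Raid configuring raid0 64 4` calls traced here fetch `bdev_raid_get_bdevs all`, pick out the `Existed_Raid` entry with `jq`, and compare its fields against the expected values. The sketch below mimics that check for the state just reached (BaseBdev2 and BaseBdev3 removed, so 2 of 4 base bdevs discovered). It is self-contained: the JSON is trimmed from the log output above, and `jq` is replaced with `grep`/`cut` so no external tool beyond coreutils is assumed.

```shell
# Trimmed copy of the Existed_Raid entry from the bdev_raid_get_bdevs output above.
raid_bdev_info='{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid0",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 2
}'

# Crude stand-ins for the suite's jq selectors.
state=$(grep -o '"state": "[a-z]*"' <<<"$raid_bdev_info" | cut -d'"' -f4)
discovered=$(grep -o '"num_base_bdevs_discovered": [0-9]*' <<<"$raid_bdev_info" | grep -o '[0-9]*$')

# With superblock (-s) and two base bdevs missing, the array must still be
# "configuring" rather than "online".
[ "$state" = "configuring" ] && [ "$discovered" -eq 2 ] &&
    echo "Existed_Raid still configuring with $discovered/4 base bdevs"
```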
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:50.507 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.507 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:21:50.507 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:50.507 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.507 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:50.507 [2024-12-09 23:03:25.770954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:50.507 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.507 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:50.507 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:50.507 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:50.507 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:50.507 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:50.507 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:50.507 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:50.507 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:50.507 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:21:50.507 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:50.507 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:50.507 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.507 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.507 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:50.507 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.507 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:50.507 "name": "Existed_Raid", 00:21:50.507 "uuid": "90f4569f-9ab6-4f91-b068-9d294691d504", 00:21:50.507 "strip_size_kb": 64, 00:21:50.507 "state": "configuring", 00:21:50.507 "raid_level": "raid0", 00:21:50.507 "superblock": true, 00:21:50.507 "num_base_bdevs": 4, 00:21:50.507 "num_base_bdevs_discovered": 3, 00:21:50.507 "num_base_bdevs_operational": 4, 00:21:50.507 "base_bdevs_list": [ 00:21:50.507 { 00:21:50.507 "name": "BaseBdev1", 00:21:50.507 "uuid": "47494339-7842-4cf3-bc11-96e65b60763d", 00:21:50.507 "is_configured": true, 00:21:50.507 "data_offset": 2048, 00:21:50.507 "data_size": 63488 00:21:50.507 }, 00:21:50.507 { 00:21:50.507 "name": null, 00:21:50.507 "uuid": "35a78a94-128a-42a3-8b49-7b6eba21ed1e", 00:21:50.507 "is_configured": false, 00:21:50.507 "data_offset": 0, 00:21:50.507 "data_size": 63488 00:21:50.507 }, 00:21:50.507 { 00:21:50.507 "name": "BaseBdev3", 00:21:50.507 "uuid": "bc49ba81-2142-4fd3-b5e5-e75b0dcf7a4b", 00:21:50.507 "is_configured": true, 00:21:50.507 "data_offset": 2048, 00:21:50.507 "data_size": 63488 00:21:50.507 }, 00:21:50.507 { 00:21:50.507 "name": "BaseBdev4", 00:21:50.507 "uuid": 
"b0ff1915-87a9-4cb8-a800-041232dd2c5b", 00:21:50.507 "is_configured": true, 00:21:50.507 "data_offset": 2048, 00:21:50.507 "data_size": 63488 00:21:50.507 } 00:21:50.507 ] 00:21:50.507 }' 00:21:50.507 23:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:50.507 23:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:50.768 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.768 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.768 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:50.768 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:50.768 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.026 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:21:51.026 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:51.026 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.026 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:51.026 [2024-12-09 23:03:26.147097] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:51.026 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.026 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:51.026 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:51.026 23:03:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:51.026 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:51.026 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:51.026 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:51.026 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:51.026 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:51.026 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:51.026 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:51.026 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.026 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.026 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:51.026 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:51.026 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.026 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:51.026 "name": "Existed_Raid", 00:21:51.026 "uuid": "90f4569f-9ab6-4f91-b068-9d294691d504", 00:21:51.026 "strip_size_kb": 64, 00:21:51.026 "state": "configuring", 00:21:51.026 "raid_level": "raid0", 00:21:51.026 "superblock": true, 00:21:51.026 "num_base_bdevs": 4, 00:21:51.026 "num_base_bdevs_discovered": 2, 00:21:51.026 "num_base_bdevs_operational": 4, 00:21:51.026 "base_bdevs_list": [ 00:21:51.026 { 00:21:51.026 "name": null, 00:21:51.026 
"uuid": "47494339-7842-4cf3-bc11-96e65b60763d", 00:21:51.026 "is_configured": false, 00:21:51.026 "data_offset": 0, 00:21:51.026 "data_size": 63488 00:21:51.026 }, 00:21:51.026 { 00:21:51.026 "name": null, 00:21:51.026 "uuid": "35a78a94-128a-42a3-8b49-7b6eba21ed1e", 00:21:51.026 "is_configured": false, 00:21:51.027 "data_offset": 0, 00:21:51.027 "data_size": 63488 00:21:51.027 }, 00:21:51.027 { 00:21:51.027 "name": "BaseBdev3", 00:21:51.027 "uuid": "bc49ba81-2142-4fd3-b5e5-e75b0dcf7a4b", 00:21:51.027 "is_configured": true, 00:21:51.027 "data_offset": 2048, 00:21:51.027 "data_size": 63488 00:21:51.027 }, 00:21:51.027 { 00:21:51.027 "name": "BaseBdev4", 00:21:51.027 "uuid": "b0ff1915-87a9-4cb8-a800-041232dd2c5b", 00:21:51.027 "is_configured": true, 00:21:51.027 "data_offset": 2048, 00:21:51.027 "data_size": 63488 00:21:51.027 } 00:21:51.027 ] 00:21:51.027 }' 00:21:51.027 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:51.027 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:51.287 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:51.287 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.287 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.287 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:51.287 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.287 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:21:51.287 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:51.287 23:03:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.287 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:51.287 [2024-12-09 23:03:26.575759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:51.287 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.287 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:51.287 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:51.287 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:51.287 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:51.287 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:51.287 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:51.287 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:51.287 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:51.287 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:51.287 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:51.287 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.287 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.287 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:51.287 23:03:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:51.287 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.287 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:51.287 "name": "Existed_Raid", 00:21:51.287 "uuid": "90f4569f-9ab6-4f91-b068-9d294691d504", 00:21:51.287 "strip_size_kb": 64, 00:21:51.287 "state": "configuring", 00:21:51.287 "raid_level": "raid0", 00:21:51.287 "superblock": true, 00:21:51.287 "num_base_bdevs": 4, 00:21:51.287 "num_base_bdevs_discovered": 3, 00:21:51.287 "num_base_bdevs_operational": 4, 00:21:51.287 "base_bdevs_list": [ 00:21:51.287 { 00:21:51.287 "name": null, 00:21:51.287 "uuid": "47494339-7842-4cf3-bc11-96e65b60763d", 00:21:51.287 "is_configured": false, 00:21:51.287 "data_offset": 0, 00:21:51.287 "data_size": 63488 00:21:51.287 }, 00:21:51.287 { 00:21:51.287 "name": "BaseBdev2", 00:21:51.287 "uuid": "35a78a94-128a-42a3-8b49-7b6eba21ed1e", 00:21:51.287 "is_configured": true, 00:21:51.287 "data_offset": 2048, 00:21:51.287 "data_size": 63488 00:21:51.287 }, 00:21:51.287 { 00:21:51.287 "name": "BaseBdev3", 00:21:51.287 "uuid": "bc49ba81-2142-4fd3-b5e5-e75b0dcf7a4b", 00:21:51.287 "is_configured": true, 00:21:51.287 "data_offset": 2048, 00:21:51.287 "data_size": 63488 00:21:51.287 }, 00:21:51.287 { 00:21:51.287 "name": "BaseBdev4", 00:21:51.287 "uuid": "b0ff1915-87a9-4cb8-a800-041232dd2c5b", 00:21:51.287 "is_configured": true, 00:21:51.287 "data_offset": 2048, 00:21:51.287 "data_size": 63488 00:21:51.287 } 00:21:51.287 ] 00:21:51.287 }' 00:21:51.287 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:51.287 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:51.548 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:51.548 23:03:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.808 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.808 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:51.808 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.808 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:21:51.808 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.808 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:51.808 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.808 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:51.808 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.808 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 47494339-7842-4cf3-bc11-96e65b60763d 00:21:51.808 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.808 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:51.808 [2024-12-09 23:03:26.994433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:51.808 [2024-12-09 23:03:26.994734] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:51.808 [2024-12-09 23:03:26.994750] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:51.808 [2024-12-09 23:03:26.995061] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:21:51.808 NewBaseBdev 00:21:51.808 [2024-12-09 23:03:26.995241] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:51.808 [2024-12-09 23:03:26.995254] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:21:51.808 [2024-12-09 23:03:26.995396] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:51.808 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.808 23:03:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:21:51.808 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:21:51.808 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:51.808 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:51.808 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:51.808 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:51.808 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:51.808 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.808 23:03:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:51.808 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.808 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:51.808 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.808 23:03:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:51.808 [ 00:21:51.808 { 00:21:51.808 "name": "NewBaseBdev", 00:21:51.808 "aliases": [ 00:21:51.808 "47494339-7842-4cf3-bc11-96e65b60763d" 00:21:51.808 ], 00:21:51.808 "product_name": "Malloc disk", 00:21:51.808 "block_size": 512, 00:21:51.808 "num_blocks": 65536, 00:21:51.808 "uuid": "47494339-7842-4cf3-bc11-96e65b60763d", 00:21:51.808 "assigned_rate_limits": { 00:21:51.808 "rw_ios_per_sec": 0, 00:21:51.808 "rw_mbytes_per_sec": 0, 00:21:51.808 "r_mbytes_per_sec": 0, 00:21:51.808 "w_mbytes_per_sec": 0 00:21:51.808 }, 00:21:51.808 "claimed": true, 00:21:51.808 "claim_type": "exclusive_write", 00:21:51.808 "zoned": false, 00:21:51.808 "supported_io_types": { 00:21:51.808 "read": true, 00:21:51.808 "write": true, 00:21:51.808 "unmap": true, 00:21:51.808 "flush": true, 00:21:51.808 "reset": true, 00:21:51.808 "nvme_admin": false, 00:21:51.808 "nvme_io": false, 00:21:51.808 "nvme_io_md": false, 00:21:51.808 "write_zeroes": true, 00:21:51.808 "zcopy": true, 00:21:51.808 "get_zone_info": false, 00:21:51.808 "zone_management": false, 00:21:51.808 "zone_append": false, 00:21:51.808 "compare": false, 00:21:51.808 "compare_and_write": false, 00:21:51.808 "abort": true, 00:21:51.808 "seek_hole": false, 00:21:51.808 "seek_data": false, 00:21:51.808 "copy": true, 00:21:51.808 "nvme_iov_md": false 00:21:51.808 }, 00:21:51.808 "memory_domains": [ 00:21:51.808 { 00:21:51.808 "dma_device_id": "system", 00:21:51.808 "dma_device_type": 1 00:21:51.808 }, 00:21:51.808 { 00:21:51.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:51.808 "dma_device_type": 2 00:21:51.808 } 00:21:51.808 ], 00:21:51.808 "driver_specific": {} 00:21:51.808 } 00:21:51.808 ] 00:21:51.808 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.808 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:51.808 23:03:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:21:51.808 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:51.808 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:51.808 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:51.808 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:51.808 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:51.808 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:51.808 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:51.808 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:51.808 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:51.808 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.808 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:51.808 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.808 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:51.808 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.808 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:51.808 "name": "Existed_Raid", 00:21:51.808 "uuid": "90f4569f-9ab6-4f91-b068-9d294691d504", 00:21:51.808 "strip_size_kb": 64, 00:21:51.808 
"state": "online", 00:21:51.808 "raid_level": "raid0", 00:21:51.808 "superblock": true, 00:21:51.808 "num_base_bdevs": 4, 00:21:51.808 "num_base_bdevs_discovered": 4, 00:21:51.808 "num_base_bdevs_operational": 4, 00:21:51.808 "base_bdevs_list": [ 00:21:51.808 { 00:21:51.808 "name": "NewBaseBdev", 00:21:51.808 "uuid": "47494339-7842-4cf3-bc11-96e65b60763d", 00:21:51.808 "is_configured": true, 00:21:51.808 "data_offset": 2048, 00:21:51.808 "data_size": 63488 00:21:51.808 }, 00:21:51.808 { 00:21:51.808 "name": "BaseBdev2", 00:21:51.808 "uuid": "35a78a94-128a-42a3-8b49-7b6eba21ed1e", 00:21:51.808 "is_configured": true, 00:21:51.808 "data_offset": 2048, 00:21:51.808 "data_size": 63488 00:21:51.808 }, 00:21:51.808 { 00:21:51.808 "name": "BaseBdev3", 00:21:51.808 "uuid": "bc49ba81-2142-4fd3-b5e5-e75b0dcf7a4b", 00:21:51.808 "is_configured": true, 00:21:51.808 "data_offset": 2048, 00:21:51.808 "data_size": 63488 00:21:51.808 }, 00:21:51.808 { 00:21:51.808 "name": "BaseBdev4", 00:21:51.808 "uuid": "b0ff1915-87a9-4cb8-a800-041232dd2c5b", 00:21:51.808 "is_configured": true, 00:21:51.808 "data_offset": 2048, 00:21:51.808 "data_size": 63488 00:21:51.808 } 00:21:51.808 ] 00:21:51.808 }' 00:21:51.808 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:51.808 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:52.069 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:21:52.069 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:52.069 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:52.069 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:52.069 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:21:52.069 
23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:52.069 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:52.069 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:52.069 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.069 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:52.069 [2024-12-09 23:03:27.391087] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:52.069 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.069 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:52.069 "name": "Existed_Raid", 00:21:52.069 "aliases": [ 00:21:52.069 "90f4569f-9ab6-4f91-b068-9d294691d504" 00:21:52.069 ], 00:21:52.069 "product_name": "Raid Volume", 00:21:52.069 "block_size": 512, 00:21:52.069 "num_blocks": 253952, 00:21:52.069 "uuid": "90f4569f-9ab6-4f91-b068-9d294691d504", 00:21:52.069 "assigned_rate_limits": { 00:21:52.069 "rw_ios_per_sec": 0, 00:21:52.069 "rw_mbytes_per_sec": 0, 00:21:52.069 "r_mbytes_per_sec": 0, 00:21:52.069 "w_mbytes_per_sec": 0 00:21:52.069 }, 00:21:52.069 "claimed": false, 00:21:52.069 "zoned": false, 00:21:52.069 "supported_io_types": { 00:21:52.069 "read": true, 00:21:52.069 "write": true, 00:21:52.069 "unmap": true, 00:21:52.069 "flush": true, 00:21:52.069 "reset": true, 00:21:52.069 "nvme_admin": false, 00:21:52.069 "nvme_io": false, 00:21:52.069 "nvme_io_md": false, 00:21:52.069 "write_zeroes": true, 00:21:52.069 "zcopy": false, 00:21:52.069 "get_zone_info": false, 00:21:52.069 "zone_management": false, 00:21:52.069 "zone_append": false, 00:21:52.069 "compare": false, 00:21:52.069 "compare_and_write": false, 00:21:52.069 "abort": 
false, 00:21:52.069 "seek_hole": false, 00:21:52.069 "seek_data": false, 00:21:52.069 "copy": false, 00:21:52.069 "nvme_iov_md": false 00:21:52.069 }, 00:21:52.069 "memory_domains": [ 00:21:52.069 { 00:21:52.069 "dma_device_id": "system", 00:21:52.069 "dma_device_type": 1 00:21:52.069 }, 00:21:52.069 { 00:21:52.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.069 "dma_device_type": 2 00:21:52.069 }, 00:21:52.069 { 00:21:52.069 "dma_device_id": "system", 00:21:52.069 "dma_device_type": 1 00:21:52.069 }, 00:21:52.069 { 00:21:52.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.069 "dma_device_type": 2 00:21:52.069 }, 00:21:52.069 { 00:21:52.069 "dma_device_id": "system", 00:21:52.069 "dma_device_type": 1 00:21:52.069 }, 00:21:52.069 { 00:21:52.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.069 "dma_device_type": 2 00:21:52.069 }, 00:21:52.069 { 00:21:52.069 "dma_device_id": "system", 00:21:52.069 "dma_device_type": 1 00:21:52.069 }, 00:21:52.069 { 00:21:52.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.069 "dma_device_type": 2 00:21:52.069 } 00:21:52.069 ], 00:21:52.069 "driver_specific": { 00:21:52.069 "raid": { 00:21:52.069 "uuid": "90f4569f-9ab6-4f91-b068-9d294691d504", 00:21:52.069 "strip_size_kb": 64, 00:21:52.069 "state": "online", 00:21:52.069 "raid_level": "raid0", 00:21:52.069 "superblock": true, 00:21:52.069 "num_base_bdevs": 4, 00:21:52.069 "num_base_bdevs_discovered": 4, 00:21:52.069 "num_base_bdevs_operational": 4, 00:21:52.069 "base_bdevs_list": [ 00:21:52.069 { 00:21:52.069 "name": "NewBaseBdev", 00:21:52.069 "uuid": "47494339-7842-4cf3-bc11-96e65b60763d", 00:21:52.069 "is_configured": true, 00:21:52.069 "data_offset": 2048, 00:21:52.069 "data_size": 63488 00:21:52.069 }, 00:21:52.069 { 00:21:52.069 "name": "BaseBdev2", 00:21:52.069 "uuid": "35a78a94-128a-42a3-8b49-7b6eba21ed1e", 00:21:52.069 "is_configured": true, 00:21:52.069 "data_offset": 2048, 00:21:52.069 "data_size": 63488 00:21:52.069 }, 00:21:52.069 { 00:21:52.069 
"name": "BaseBdev3", 00:21:52.069 "uuid": "bc49ba81-2142-4fd3-b5e5-e75b0dcf7a4b", 00:21:52.069 "is_configured": true, 00:21:52.069 "data_offset": 2048, 00:21:52.069 "data_size": 63488 00:21:52.069 }, 00:21:52.069 { 00:21:52.069 "name": "BaseBdev4", 00:21:52.069 "uuid": "b0ff1915-87a9-4cb8-a800-041232dd2c5b", 00:21:52.070 "is_configured": true, 00:21:52.070 "data_offset": 2048, 00:21:52.070 "data_size": 63488 00:21:52.070 } 00:21:52.070 ] 00:21:52.070 } 00:21:52.070 } 00:21:52.070 }' 00:21:52.070 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:21:52.352 BaseBdev2 00:21:52.352 BaseBdev3 00:21:52.352 BaseBdev4' 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:52.352 23:03:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:52.352 [2024-12-09 23:03:27.634729] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:52.352 [2024-12-09 23:03:27.634783] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:52.352 [2024-12-09 23:03:27.634887] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:52.352 [2024-12-09 23:03:27.634998] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:52.352 [2024-12-09 23:03:27.635011] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68304 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 68304 ']' 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68304 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68304 00:21:52.352 killing process with pid 68304 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68304' 00:21:52.352 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68304 00:21:52.352 [2024-12-09 23:03:27.673836] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:52.353 23:03:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68304 00:21:52.924 [2024-12-09 23:03:27.984747] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:53.495 ************************************ 00:21:53.495 END TEST raid_state_function_test_sb 00:21:53.495 ************************************ 00:21:53.495 23:03:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:21:53.495 00:21:53.495 real 0m9.122s 00:21:53.495 user 0m14.089s 00:21:53.495 sys 
0m1.724s 00:21:53.495 23:03:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:53.495 23:03:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:53.755 23:03:28 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:21:53.755 23:03:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:53.755 23:03:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:53.755 23:03:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:53.755 ************************************ 00:21:53.755 START TEST raid_superblock_test 00:21:53.755 ************************************ 00:21:53.755 23:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:21:53.755 23:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:21:53.755 23:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:21:53.755 23:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:53.755 23:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:53.755 23:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:53.755 23:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:21:53.755 23:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:53.755 23:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:53.755 23:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:53.755 23:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:53.755 23:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # 
local strip_size_create_arg 00:21:53.755 23:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:53.755 23:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:53.756 23:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:21:53.756 23:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:21:53.756 23:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:21:53.756 23:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68954 00:21:53.756 23:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68954 00:21:53.756 23:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:53.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:53.756 23:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68954 ']' 00:21:53.756 23:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.756 23:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:53.756 23:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.756 23:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:53.756 23:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.756 [2024-12-09 23:03:29.019468] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:21:53.756 [2024-12-09 23:03:29.019693] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68954 ] 00:21:54.016 [2024-12-09 23:03:29.200160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.016 [2024-12-09 23:03:29.353708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.277 [2024-12-09 23:03:29.548905] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:54.277 [2024-12-09 23:03:29.549003] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:54.850 23:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:54.850 23:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:21:54.850 23:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:54.850 23:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:54.850 23:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:54.850 23:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:54.850 23:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:54.850 23:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:54.850 23:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:54.850 23:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:54.850 23:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:21:54.850 
23:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.850 23:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.850 malloc1 00:21:54.850 23:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.850 23:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:54.850 23:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.850 23:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.850 [2024-12-09 23:03:29.977520] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:54.850 [2024-12-09 23:03:29.977611] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:54.850 [2024-12-09 23:03:29.977638] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:54.850 [2024-12-09 23:03:29.977649] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:54.850 [2024-12-09 23:03:29.980331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:54.850 [2024-12-09 23:03:29.980557] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:54.850 pt1 00:21:54.850 23:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.850 23:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:54.850 23:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:54.850 23:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:54.850 23:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:21:54.850 23:03:29 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:54.850 23:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:54.850 23:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:54.850 23:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:54.850 23:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:21:54.850 23:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.850 23:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.850 malloc2 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.850 [2024-12-09 23:03:30.028973] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:54.850 [2024-12-09 23:03:30.029068] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:54.850 [2024-12-09 23:03:30.029118] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:54.850 [2024-12-09 23:03:30.029129] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:54.850 [2024-12-09 23:03:30.031799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:54.850 [2024-12-09 23:03:30.032029] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:54.850 
pt2 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.850 malloc3 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.850 [2024-12-09 23:03:30.092570] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:54.850 [2024-12-09 23:03:30.092986] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:54.850 [2024-12-09 23:03:30.093033] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:54.850 [2024-12-09 23:03:30.093044] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:54.850 [2024-12-09 23:03:30.095844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:54.850 [2024-12-09 23:03:30.095916] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:54.850 pt3 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.850 malloc4 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.850 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.850 [2024-12-09 23:03:30.144912] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:54.850 [2024-12-09 23:03:30.145010] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:54.850 [2024-12-09 23:03:30.145042] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:54.850 [2024-12-09 23:03:30.145060] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:54.850 [2024-12-09 23:03:30.147792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:54.850 [2024-12-09 23:03:30.148009] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:54.850 pt4 00:21:54.851 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.851 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:54.851 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:54.851 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:21:54.851 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.851 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.851 [2024-12-09 23:03:30.157052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:54.851 [2024-12-09 
23:03:30.159363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:54.851 [2024-12-09 23:03:30.159480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:54.851 [2024-12-09 23:03:30.159554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:54.851 [2024-12-09 23:03:30.159779] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:54.851 [2024-12-09 23:03:30.159790] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:54.851 [2024-12-09 23:03:30.160151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:54.851 [2024-12-09 23:03:30.160337] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:54.851 [2024-12-09 23:03:30.160349] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:54.851 [2024-12-09 23:03:30.160554] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:54.851 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.851 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:21:54.851 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:54.851 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:54.851 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:54.851 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:54.851 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:54.851 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:21:54.851 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:54.851 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:54.851 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:54.851 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:54.851 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.851 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.851 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.851 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.851 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:54.851 "name": "raid_bdev1", 00:21:54.851 "uuid": "5a8dff4a-38b0-49ff-b689-55e0b31604f9", 00:21:54.851 "strip_size_kb": 64, 00:21:54.851 "state": "online", 00:21:54.851 "raid_level": "raid0", 00:21:54.851 "superblock": true, 00:21:54.851 "num_base_bdevs": 4, 00:21:54.851 "num_base_bdevs_discovered": 4, 00:21:54.851 "num_base_bdevs_operational": 4, 00:21:54.851 "base_bdevs_list": [ 00:21:54.851 { 00:21:54.851 "name": "pt1", 00:21:54.851 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:54.851 "is_configured": true, 00:21:54.851 "data_offset": 2048, 00:21:54.851 "data_size": 63488 00:21:54.851 }, 00:21:54.851 { 00:21:54.851 "name": "pt2", 00:21:54.851 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:54.851 "is_configured": true, 00:21:54.851 "data_offset": 2048, 00:21:54.851 "data_size": 63488 00:21:54.851 }, 00:21:54.851 { 00:21:54.851 "name": "pt3", 00:21:54.851 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:54.851 "is_configured": true, 00:21:54.851 "data_offset": 2048, 00:21:54.851 
"data_size": 63488 00:21:54.851 }, 00:21:54.851 { 00:21:54.851 "name": "pt4", 00:21:54.851 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:54.851 "is_configured": true, 00:21:54.851 "data_offset": 2048, 00:21:54.851 "data_size": 63488 00:21:54.851 } 00:21:54.851 ] 00:21:54.851 }' 00:21:54.851 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:54.851 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:55.423 [2024-12-09 23:03:30.513495] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:55.423 "name": "raid_bdev1", 00:21:55.423 "aliases": [ 00:21:55.423 "5a8dff4a-38b0-49ff-b689-55e0b31604f9" 
00:21:55.423 ], 00:21:55.423 "product_name": "Raid Volume", 00:21:55.423 "block_size": 512, 00:21:55.423 "num_blocks": 253952, 00:21:55.423 "uuid": "5a8dff4a-38b0-49ff-b689-55e0b31604f9", 00:21:55.423 "assigned_rate_limits": { 00:21:55.423 "rw_ios_per_sec": 0, 00:21:55.423 "rw_mbytes_per_sec": 0, 00:21:55.423 "r_mbytes_per_sec": 0, 00:21:55.423 "w_mbytes_per_sec": 0 00:21:55.423 }, 00:21:55.423 "claimed": false, 00:21:55.423 "zoned": false, 00:21:55.423 "supported_io_types": { 00:21:55.423 "read": true, 00:21:55.423 "write": true, 00:21:55.423 "unmap": true, 00:21:55.423 "flush": true, 00:21:55.423 "reset": true, 00:21:55.423 "nvme_admin": false, 00:21:55.423 "nvme_io": false, 00:21:55.423 "nvme_io_md": false, 00:21:55.423 "write_zeroes": true, 00:21:55.423 "zcopy": false, 00:21:55.423 "get_zone_info": false, 00:21:55.423 "zone_management": false, 00:21:55.423 "zone_append": false, 00:21:55.423 "compare": false, 00:21:55.423 "compare_and_write": false, 00:21:55.423 "abort": false, 00:21:55.423 "seek_hole": false, 00:21:55.423 "seek_data": false, 00:21:55.423 "copy": false, 00:21:55.423 "nvme_iov_md": false 00:21:55.423 }, 00:21:55.423 "memory_domains": [ 00:21:55.423 { 00:21:55.423 "dma_device_id": "system", 00:21:55.423 "dma_device_type": 1 00:21:55.423 }, 00:21:55.423 { 00:21:55.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:55.423 "dma_device_type": 2 00:21:55.423 }, 00:21:55.423 { 00:21:55.423 "dma_device_id": "system", 00:21:55.423 "dma_device_type": 1 00:21:55.423 }, 00:21:55.423 { 00:21:55.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:55.423 "dma_device_type": 2 00:21:55.423 }, 00:21:55.423 { 00:21:55.423 "dma_device_id": "system", 00:21:55.423 "dma_device_type": 1 00:21:55.423 }, 00:21:55.423 { 00:21:55.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:55.423 "dma_device_type": 2 00:21:55.423 }, 00:21:55.423 { 00:21:55.423 "dma_device_id": "system", 00:21:55.423 "dma_device_type": 1 00:21:55.423 }, 00:21:55.423 { 00:21:55.423 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:21:55.423 "dma_device_type": 2 00:21:55.423 } 00:21:55.423 ], 00:21:55.423 "driver_specific": { 00:21:55.423 "raid": { 00:21:55.423 "uuid": "5a8dff4a-38b0-49ff-b689-55e0b31604f9", 00:21:55.423 "strip_size_kb": 64, 00:21:55.423 "state": "online", 00:21:55.423 "raid_level": "raid0", 00:21:55.423 "superblock": true, 00:21:55.423 "num_base_bdevs": 4, 00:21:55.423 "num_base_bdevs_discovered": 4, 00:21:55.423 "num_base_bdevs_operational": 4, 00:21:55.423 "base_bdevs_list": [ 00:21:55.423 { 00:21:55.423 "name": "pt1", 00:21:55.423 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:55.423 "is_configured": true, 00:21:55.423 "data_offset": 2048, 00:21:55.423 "data_size": 63488 00:21:55.423 }, 00:21:55.423 { 00:21:55.423 "name": "pt2", 00:21:55.423 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:55.423 "is_configured": true, 00:21:55.423 "data_offset": 2048, 00:21:55.423 "data_size": 63488 00:21:55.423 }, 00:21:55.423 { 00:21:55.423 "name": "pt3", 00:21:55.423 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:55.423 "is_configured": true, 00:21:55.423 "data_offset": 2048, 00:21:55.423 "data_size": 63488 00:21:55.423 }, 00:21:55.423 { 00:21:55.423 "name": "pt4", 00:21:55.423 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:55.423 "is_configured": true, 00:21:55.423 "data_offset": 2048, 00:21:55.423 "data_size": 63488 00:21:55.423 } 00:21:55.423 ] 00:21:55.423 } 00:21:55.423 } 00:21:55.423 }' 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:55.423 pt2 00:21:55.423 pt3 00:21:55.423 pt4' 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:55.423 23:03:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.423 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.424 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.424 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:55.424 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:55.424 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:55.424 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:55.424 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:55.424 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.424 [2024-12-09 23:03:30.765517] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:55.424 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.684 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5a8dff4a-38b0-49ff-b689-55e0b31604f9 00:21:55.684 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5a8dff4a-38b0-49ff-b689-55e0b31604f9 ']' 00:21:55.684 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:55.684 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.684 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.684 [2024-12-09 23:03:30.813203] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:55.684 [2024-12-09 23:03:30.813241] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:55.684 [2024-12-09 23:03:30.813343] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:55.684 [2024-12-09 23:03:30.813428] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:55.684 [2024-12-09 23:03:30.813445] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:55.684 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.684 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.684 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:55.684 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:21:55.684 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.684 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.684 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:55.684 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:55.684 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:55.684 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:55.684 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.684 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.684 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.684 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:55.684 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:55.684 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.684 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.684 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.684 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:55.684 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:21:55.684 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.684 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.684 23:03:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.684 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:55.684 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:21:55.684 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.684 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.684 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.685 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:55.685 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.685 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.685 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:55.685 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.685 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:55.685 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:21:55.685 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:21:55.685 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:21:55.685 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:55.685 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:55.685 23:03:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:55.685 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:55.685 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:21:55.685 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.685 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.685 [2024-12-09 23:03:30.949335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:55.685 [2024-12-09 23:03:30.952260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:55.685 [2024-12-09 23:03:30.952355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:55.685 [2024-12-09 23:03:30.952415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:21:55.685 [2024-12-09 23:03:30.952495] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:55.685 [2024-12-09 23:03:30.952572] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:55.685 [2024-12-09 23:03:30.952595] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:21:55.685 [2024-12-09 23:03:30.952626] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:21:55.685 [2024-12-09 23:03:30.952648] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:55.685 [2024-12-09 23:03:30.952672] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:21:55.685 request: 00:21:55.685 { 00:21:55.685 "name": "raid_bdev1", 00:21:55.685 "raid_level": "raid0", 00:21:55.685 "base_bdevs": [ 00:21:55.685 "malloc1", 00:21:55.685 "malloc2", 00:21:55.685 "malloc3", 00:21:55.685 "malloc4" 00:21:55.685 ], 00:21:55.685 "strip_size_kb": 64, 00:21:55.685 "superblock": false, 00:21:55.685 "method": "bdev_raid_create", 00:21:55.685 "req_id": 1 00:21:55.685 } 00:21:55.685 Got JSON-RPC error response 00:21:55.685 response: 00:21:55.685 { 00:21:55.685 "code": -17, 00:21:55.685 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:55.685 } 00:21:55.685 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:55.685 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:21:55.685 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:55.685 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:55.685 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:55.685 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.685 23:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:55.685 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.685 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.685 23:03:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.685 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:55.685 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:55.685 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:21:55.685 23:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.685 23:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.685 [2024-12-09 23:03:31.021480] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:55.685 [2024-12-09 23:03:31.021837] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:55.685 [2024-12-09 23:03:31.021876] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:55.685 [2024-12-09 23:03:31.021891] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:55.685 [2024-12-09 23:03:31.024846] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:55.685 [2024-12-09 23:03:31.025131] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:55.685 [2024-12-09 23:03:31.025310] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:55.685 [2024-12-09 23:03:31.025395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:55.685 pt1 00:21:55.685 23:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.685 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:21:55.685 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:55.685 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:55.685 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:55.685 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:55.685 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:21:55.685 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:55.685 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:55.685 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:55.685 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:55.685 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.685 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.685 23:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.685 23:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.685 23:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.945 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:55.945 "name": "raid_bdev1", 00:21:55.945 "uuid": "5a8dff4a-38b0-49ff-b689-55e0b31604f9", 00:21:55.945 "strip_size_kb": 64, 00:21:55.945 "state": "configuring", 00:21:55.945 "raid_level": "raid0", 00:21:55.945 "superblock": true, 00:21:55.945 "num_base_bdevs": 4, 00:21:55.945 "num_base_bdevs_discovered": 1, 00:21:55.945 "num_base_bdevs_operational": 4, 00:21:55.945 "base_bdevs_list": [ 00:21:55.945 { 00:21:55.945 "name": "pt1", 00:21:55.945 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:55.945 "is_configured": true, 00:21:55.945 "data_offset": 2048, 00:21:55.945 "data_size": 63488 00:21:55.945 }, 00:21:55.945 { 00:21:55.945 "name": null, 00:21:55.945 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:55.945 "is_configured": false, 00:21:55.945 "data_offset": 2048, 00:21:55.945 "data_size": 63488 00:21:55.945 }, 00:21:55.945 { 00:21:55.945 "name": null, 00:21:55.945 
"uuid": "00000000-0000-0000-0000-000000000003", 00:21:55.945 "is_configured": false, 00:21:55.945 "data_offset": 2048, 00:21:55.945 "data_size": 63488 00:21:55.945 }, 00:21:55.945 { 00:21:55.945 "name": null, 00:21:55.945 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:55.945 "is_configured": false, 00:21:55.945 "data_offset": 2048, 00:21:55.945 "data_size": 63488 00:21:55.945 } 00:21:55.945 ] 00:21:55.945 }' 00:21:55.945 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:55.945 23:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.207 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:21:56.207 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:56.207 23:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.207 23:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.207 [2024-12-09 23:03:31.373745] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:56.207 [2024-12-09 23:03:31.373854] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:56.207 [2024-12-09 23:03:31.373878] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:56.207 [2024-12-09 23:03:31.373890] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:56.207 [2024-12-09 23:03:31.374426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:56.207 [2024-12-09 23:03:31.374448] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:56.207 [2024-12-09 23:03:31.374546] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:56.207 [2024-12-09 23:03:31.374582] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:56.207 pt2 00:21:56.207 23:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.207 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:21:56.207 23:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.207 23:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.207 [2024-12-09 23:03:31.381775] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:56.207 23:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.207 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:21:56.207 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:56.207 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:56.207 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:56.207 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:56.207 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:56.207 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:56.207 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:56.207 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:56.207 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:56.207 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.207 23:03:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.207 23:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.207 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.207 23:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.207 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:56.207 "name": "raid_bdev1", 00:21:56.207 "uuid": "5a8dff4a-38b0-49ff-b689-55e0b31604f9", 00:21:56.207 "strip_size_kb": 64, 00:21:56.207 "state": "configuring", 00:21:56.207 "raid_level": "raid0", 00:21:56.208 "superblock": true, 00:21:56.208 "num_base_bdevs": 4, 00:21:56.208 "num_base_bdevs_discovered": 1, 00:21:56.208 "num_base_bdevs_operational": 4, 00:21:56.208 "base_bdevs_list": [ 00:21:56.208 { 00:21:56.208 "name": "pt1", 00:21:56.208 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:56.208 "is_configured": true, 00:21:56.208 "data_offset": 2048, 00:21:56.208 "data_size": 63488 00:21:56.208 }, 00:21:56.208 { 00:21:56.208 "name": null, 00:21:56.208 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:56.208 "is_configured": false, 00:21:56.208 "data_offset": 0, 00:21:56.208 "data_size": 63488 00:21:56.208 }, 00:21:56.208 { 00:21:56.208 "name": null, 00:21:56.208 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:56.208 "is_configured": false, 00:21:56.208 "data_offset": 2048, 00:21:56.208 "data_size": 63488 00:21:56.208 }, 00:21:56.208 { 00:21:56.208 "name": null, 00:21:56.208 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:56.208 "is_configured": false, 00:21:56.208 "data_offset": 2048, 00:21:56.208 "data_size": 63488 00:21:56.208 } 00:21:56.208 ] 00:21:56.208 }' 00:21:56.208 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:56.208 23:03:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.472 [2024-12-09 23:03:31.737848] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:56.472 [2024-12-09 23:03:31.737950] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:56.472 [2024-12-09 23:03:31.737976] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:56.472 [2024-12-09 23:03:31.737987] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:56.472 [2024-12-09 23:03:31.738539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:56.472 [2024-12-09 23:03:31.738577] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:56.472 [2024-12-09 23:03:31.738680] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:56.472 [2024-12-09 23:03:31.738706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:56.472 pt2 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.472 [2024-12-09 23:03:31.745828] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:56.472 [2024-12-09 23:03:31.745903] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:56.472 [2024-12-09 23:03:31.745926] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:56.472 [2024-12-09 23:03:31.745935] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:56.472 [2024-12-09 23:03:31.746454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:56.472 [2024-12-09 23:03:31.746472] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:56.472 [2024-12-09 23:03:31.746561] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:56.472 [2024-12-09 23:03:31.746589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:56.472 pt3 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.472 [2024-12-09 23:03:31.753790] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:56.472 [2024-12-09 23:03:31.753861] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:56.472 [2024-12-09 23:03:31.753884] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:21:56.472 [2024-12-09 23:03:31.753895] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:56.472 [2024-12-09 23:03:31.754457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:56.472 [2024-12-09 23:03:31.754535] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:56.472 [2024-12-09 23:03:31.754629] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:21:56.472 [2024-12-09 23:03:31.754664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:56.472 [2024-12-09 23:03:31.754829] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:56.472 [2024-12-09 23:03:31.754837] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:56.472 [2024-12-09 23:03:31.755139] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:56.472 [2024-12-09 23:03:31.755302] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:56.472 [2024-12-09 23:03:31.755313] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:56.472 [2024-12-09 23:03:31.755459] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:56.472 pt4 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:56.472 "name": "raid_bdev1", 00:21:56.472 "uuid": "5a8dff4a-38b0-49ff-b689-55e0b31604f9", 00:21:56.472 "strip_size_kb": 64, 00:21:56.472 "state": "online", 00:21:56.472 "raid_level": "raid0", 00:21:56.472 
"superblock": true, 00:21:56.472 "num_base_bdevs": 4, 00:21:56.472 "num_base_bdevs_discovered": 4, 00:21:56.472 "num_base_bdevs_operational": 4, 00:21:56.472 "base_bdevs_list": [ 00:21:56.472 { 00:21:56.472 "name": "pt1", 00:21:56.472 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:56.472 "is_configured": true, 00:21:56.472 "data_offset": 2048, 00:21:56.472 "data_size": 63488 00:21:56.472 }, 00:21:56.472 { 00:21:56.472 "name": "pt2", 00:21:56.472 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:56.472 "is_configured": true, 00:21:56.472 "data_offset": 2048, 00:21:56.472 "data_size": 63488 00:21:56.472 }, 00:21:56.472 { 00:21:56.472 "name": "pt3", 00:21:56.472 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:56.472 "is_configured": true, 00:21:56.472 "data_offset": 2048, 00:21:56.472 "data_size": 63488 00:21:56.472 }, 00:21:56.472 { 00:21:56.472 "name": "pt4", 00:21:56.472 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:56.472 "is_configured": true, 00:21:56.472 "data_offset": 2048, 00:21:56.472 "data_size": 63488 00:21:56.472 } 00:21:56.472 ] 00:21:56.472 }' 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:56.472 23:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.734 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:56.734 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:56.734 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:56.734 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:56.734 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:56.734 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:56.734 23:03:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:56.734 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:56.734 23:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.734 23:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.996 [2024-12-09 23:03:32.098343] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:56.996 23:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.996 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:56.996 "name": "raid_bdev1", 00:21:56.996 "aliases": [ 00:21:56.996 "5a8dff4a-38b0-49ff-b689-55e0b31604f9" 00:21:56.996 ], 00:21:56.996 "product_name": "Raid Volume", 00:21:56.996 "block_size": 512, 00:21:56.996 "num_blocks": 253952, 00:21:56.996 "uuid": "5a8dff4a-38b0-49ff-b689-55e0b31604f9", 00:21:56.996 "assigned_rate_limits": { 00:21:56.996 "rw_ios_per_sec": 0, 00:21:56.996 "rw_mbytes_per_sec": 0, 00:21:56.996 "r_mbytes_per_sec": 0, 00:21:56.996 "w_mbytes_per_sec": 0 00:21:56.996 }, 00:21:56.996 "claimed": false, 00:21:56.996 "zoned": false, 00:21:56.996 "supported_io_types": { 00:21:56.996 "read": true, 00:21:56.996 "write": true, 00:21:56.996 "unmap": true, 00:21:56.996 "flush": true, 00:21:56.996 "reset": true, 00:21:56.996 "nvme_admin": false, 00:21:56.996 "nvme_io": false, 00:21:56.996 "nvme_io_md": false, 00:21:56.996 "write_zeroes": true, 00:21:56.996 "zcopy": false, 00:21:56.996 "get_zone_info": false, 00:21:56.996 "zone_management": false, 00:21:56.996 "zone_append": false, 00:21:56.996 "compare": false, 00:21:56.996 "compare_and_write": false, 00:21:56.996 "abort": false, 00:21:56.996 "seek_hole": false, 00:21:56.996 "seek_data": false, 00:21:56.996 "copy": false, 00:21:56.996 "nvme_iov_md": false 00:21:56.996 }, 00:21:56.996 
"memory_domains": [ 00:21:56.996 { 00:21:56.996 "dma_device_id": "system", 00:21:56.996 "dma_device_type": 1 00:21:56.996 }, 00:21:56.996 { 00:21:56.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.996 "dma_device_type": 2 00:21:56.996 }, 00:21:56.996 { 00:21:56.996 "dma_device_id": "system", 00:21:56.996 "dma_device_type": 1 00:21:56.996 }, 00:21:56.996 { 00:21:56.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.996 "dma_device_type": 2 00:21:56.996 }, 00:21:56.996 { 00:21:56.996 "dma_device_id": "system", 00:21:56.996 "dma_device_type": 1 00:21:56.996 }, 00:21:56.996 { 00:21:56.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.996 "dma_device_type": 2 00:21:56.996 }, 00:21:56.996 { 00:21:56.996 "dma_device_id": "system", 00:21:56.996 "dma_device_type": 1 00:21:56.996 }, 00:21:56.996 { 00:21:56.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.996 "dma_device_type": 2 00:21:56.996 } 00:21:56.996 ], 00:21:56.996 "driver_specific": { 00:21:56.996 "raid": { 00:21:56.996 "uuid": "5a8dff4a-38b0-49ff-b689-55e0b31604f9", 00:21:56.996 "strip_size_kb": 64, 00:21:56.996 "state": "online", 00:21:56.996 "raid_level": "raid0", 00:21:56.996 "superblock": true, 00:21:56.996 "num_base_bdevs": 4, 00:21:56.996 "num_base_bdevs_discovered": 4, 00:21:56.996 "num_base_bdevs_operational": 4, 00:21:56.996 "base_bdevs_list": [ 00:21:56.996 { 00:21:56.996 "name": "pt1", 00:21:56.996 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:56.996 "is_configured": true, 00:21:56.996 "data_offset": 2048, 00:21:56.996 "data_size": 63488 00:21:56.996 }, 00:21:56.996 { 00:21:56.996 "name": "pt2", 00:21:56.996 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:56.996 "is_configured": true, 00:21:56.996 "data_offset": 2048, 00:21:56.996 "data_size": 63488 00:21:56.996 }, 00:21:56.996 { 00:21:56.996 "name": "pt3", 00:21:56.996 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:56.996 "is_configured": true, 00:21:56.996 "data_offset": 2048, 00:21:56.996 "data_size": 63488 
00:21:56.996 }, 00:21:56.996 { 00:21:56.996 "name": "pt4", 00:21:56.996 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:56.996 "is_configured": true, 00:21:56.996 "data_offset": 2048, 00:21:56.996 "data_size": 63488 00:21:56.996 } 00:21:56.996 ] 00:21:56.996 } 00:21:56.996 } 00:21:56.996 }' 00:21:56.996 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:56.996 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:56.996 pt2 00:21:56.996 pt3 00:21:56.996 pt4' 00:21:56.996 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:56.996 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:56.996 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:56.996 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:56.996 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:56.996 23:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.996 23:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.996 23:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.996 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:56.996 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:56.996 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:56.997 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:21:56.997 23:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.997 23:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.997 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:56.997 23:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.997 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:56.997 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:56.997 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:56.997 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:56.997 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:21:56.997 23:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.997 23:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.997 23:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.997 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:56.997 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:56.997 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:56.997 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:21:56.997 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:21:56.997 23:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.997 23:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.997 23:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.260 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:57.260 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:57.260 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:57.260 23:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.260 23:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.260 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:57.260 [2024-12-09 23:03:32.378395] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:57.260 23:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.260 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5a8dff4a-38b0-49ff-b689-55e0b31604f9 '!=' 5a8dff4a-38b0-49ff-b689-55e0b31604f9 ']' 00:21:57.260 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:21:57.260 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:57.260 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:21:57.261 23:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68954 00:21:57.261 23:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68954 ']' 00:21:57.261 23:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68954 00:21:57.261 23:03:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:21:57.261 23:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:57.261 23:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68954 00:21:57.261 killing process with pid 68954 00:21:57.261 23:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:57.261 23:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:57.261 23:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68954' 00:21:57.261 23:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 68954 00:21:57.261 23:03:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68954 00:21:57.261 [2024-12-09 23:03:32.434341] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:57.261 [2024-12-09 23:03:32.434451] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:57.261 [2024-12-09 23:03:32.434545] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:57.261 [2024-12-09 23:03:32.434555] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:57.522 [2024-12-09 23:03:32.720561] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:58.547 23:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:21:58.547 ************************************ 00:21:58.547 END TEST raid_superblock_test 00:21:58.547 ************************************ 00:21:58.547 00:21:58.547 real 0m4.642s 00:21:58.547 user 0m6.437s 00:21:58.547 sys 0m0.929s 00:21:58.547 23:03:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:58.547 23:03:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.547 23:03:33 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:21:58.547 23:03:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:58.547 23:03:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:58.547 23:03:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:58.547 ************************************ 00:21:58.547 START TEST raid_read_error_test 00:21:58.547 ************************************ 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.TgYExA5y9p 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69213 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69213 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69213 ']' 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:58.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:58.547 23:03:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.547 [2024-12-09 23:03:33.728226] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:21:58.547 [2024-12-09 23:03:33.728380] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69213 ] 00:21:58.830 [2024-12-09 23:03:33.921517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.830 [2024-12-09 23:03:34.108995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.090 [2024-12-09 23:03:34.277005] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:59.090 [2024-12-09 23:03:34.277329] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:59.350 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:59.350 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:21:59.350 23:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:59.351 23:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:59.351 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.351 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.351 BaseBdev1_malloc 00:21:59.351 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.351 23:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:21:59.351 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.351 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.351 true 00:21:59.351 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:59.351 23:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:59.351 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.351 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.351 [2024-12-09 23:03:34.665509] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:59.351 [2024-12-09 23:03:34.665592] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:59.351 [2024-12-09 23:03:34.665620] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:59.351 [2024-12-09 23:03:34.665634] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:59.351 [2024-12-09 23:03:34.668251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:59.351 [2024-12-09 23:03:34.668309] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:59.351 BaseBdev1 00:21:59.351 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.351 23:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:59.351 23:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:59.351 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.351 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.351 BaseBdev2_malloc 00:21:59.351 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.351 23:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:21:59.351 23:03:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.351 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.611 true 00:21:59.611 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.611 23:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:59.611 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.611 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.611 [2024-12-09 23:03:34.719747] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:59.611 [2024-12-09 23:03:34.719999] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:59.611 [2024-12-09 23:03:34.720030] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:59.611 [2024-12-09 23:03:34.720042] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:59.611 [2024-12-09 23:03:34.722584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:59.611 [2024-12-09 23:03:34.722633] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:59.611 BaseBdev2 00:21:59.611 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.611 23:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:59.611 23:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:59.611 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.611 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.611 BaseBdev3_malloc 00:21:59.611 23:03:34 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.611 23:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:21:59.611 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.611 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.611 true 00:21:59.611 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.611 23:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:59.611 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.611 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.611 [2024-12-09 23:03:34.780459] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:59.611 [2024-12-09 23:03:34.780539] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:59.611 [2024-12-09 23:03:34.780562] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:59.611 [2024-12-09 23:03:34.780574] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:59.611 [2024-12-09 23:03:34.783726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:59.611 [2024-12-09 23:03:34.783793] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:59.611 BaseBdev3 00:21:59.611 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.611 23:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:59.611 23:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:21:59.611 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.611 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.611 BaseBdev4_malloc 00:21:59.612 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.612 23:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:21:59.612 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.612 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.612 true 00:21:59.612 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.612 23:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:21:59.612 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.612 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.612 [2024-12-09 23:03:34.838555] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:21:59.612 [2024-12-09 23:03:34.838631] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:59.612 [2024-12-09 23:03:34.838656] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:59.612 [2024-12-09 23:03:34.838668] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:59.612 [2024-12-09 23:03:34.841333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:59.612 [2024-12-09 23:03:34.841391] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:59.612 BaseBdev4 00:21:59.612 23:03:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.612 23:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:21:59.612 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.612 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.612 [2024-12-09 23:03:34.846682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:59.612 [2024-12-09 23:03:34.849019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:59.612 [2024-12-09 23:03:34.849338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:59.612 [2024-12-09 23:03:34.849443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:59.612 [2024-12-09 23:03:34.849712] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:21:59.612 [2024-12-09 23:03:34.849734] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:59.612 [2024-12-09 23:03:34.850056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:21:59.612 [2024-12-09 23:03:34.850271] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:21:59.612 [2024-12-09 23:03:34.850286] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:21:59.612 [2024-12-09 23:03:34.850554] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:59.612 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.612 23:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:21:59.612 23:03:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:59.612 23:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:59.612 23:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:59.612 23:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:59.612 23:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:59.612 23:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:59.612 23:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:59.612 23:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:59.612 23:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:59.612 23:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.612 23:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.612 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.612 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.612 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.612 23:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:59.612 "name": "raid_bdev1", 00:21:59.612 "uuid": "2064cf23-dc8d-4c17-8929-608296e14f91", 00:21:59.612 "strip_size_kb": 64, 00:21:59.612 "state": "online", 00:21:59.612 "raid_level": "raid0", 00:21:59.612 "superblock": true, 00:21:59.612 "num_base_bdevs": 4, 00:21:59.612 "num_base_bdevs_discovered": 4, 00:21:59.612 "num_base_bdevs_operational": 4, 00:21:59.612 "base_bdevs_list": [ 00:21:59.612 
{ 00:21:59.612 "name": "BaseBdev1", 00:21:59.612 "uuid": "2379ebe8-87ac-53db-940c-cf22964b00cb", 00:21:59.612 "is_configured": true, 00:21:59.612 "data_offset": 2048, 00:21:59.612 "data_size": 63488 00:21:59.612 }, 00:21:59.612 { 00:21:59.612 "name": "BaseBdev2", 00:21:59.612 "uuid": "eca84645-9309-543b-b183-4ae60cce12d7", 00:21:59.612 "is_configured": true, 00:21:59.612 "data_offset": 2048, 00:21:59.612 "data_size": 63488 00:21:59.612 }, 00:21:59.612 { 00:21:59.612 "name": "BaseBdev3", 00:21:59.612 "uuid": "2d053048-f154-5e6b-97fa-06570346d590", 00:21:59.612 "is_configured": true, 00:21:59.612 "data_offset": 2048, 00:21:59.612 "data_size": 63488 00:21:59.612 }, 00:21:59.612 { 00:21:59.612 "name": "BaseBdev4", 00:21:59.612 "uuid": "915579e3-fcdf-5da0-9655-996d71c574c1", 00:21:59.612 "is_configured": true, 00:21:59.612 "data_offset": 2048, 00:21:59.612 "data_size": 63488 00:21:59.612 } 00:21:59.612 ] 00:21:59.612 }' 00:21:59.612 23:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:59.612 23:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.871 23:03:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:21:59.871 23:03:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:22:00.132 [2024-12-09 23:03:35.292541] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:22:01.109 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:22:01.109 23:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.109 23:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.109 23:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.109 23:03:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:22:01.109 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:22:01.109 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:22:01.109 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:22:01.109 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:01.109 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:01.109 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:01.109 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:01.109 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:01.109 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:01.109 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:01.109 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:01.109 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:01.109 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.109 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:01.109 23:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.109 23:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.109 23:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.109 23:03:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:01.109 "name": "raid_bdev1", 00:22:01.109 "uuid": "2064cf23-dc8d-4c17-8929-608296e14f91", 00:22:01.109 "strip_size_kb": 64, 00:22:01.109 "state": "online", 00:22:01.109 "raid_level": "raid0", 00:22:01.109 "superblock": true, 00:22:01.109 "num_base_bdevs": 4, 00:22:01.109 "num_base_bdevs_discovered": 4, 00:22:01.109 "num_base_bdevs_operational": 4, 00:22:01.109 "base_bdevs_list": [ 00:22:01.109 { 00:22:01.109 "name": "BaseBdev1", 00:22:01.109 "uuid": "2379ebe8-87ac-53db-940c-cf22964b00cb", 00:22:01.109 "is_configured": true, 00:22:01.109 "data_offset": 2048, 00:22:01.109 "data_size": 63488 00:22:01.109 }, 00:22:01.109 { 00:22:01.109 "name": "BaseBdev2", 00:22:01.109 "uuid": "eca84645-9309-543b-b183-4ae60cce12d7", 00:22:01.109 "is_configured": true, 00:22:01.109 "data_offset": 2048, 00:22:01.109 "data_size": 63488 00:22:01.109 }, 00:22:01.109 { 00:22:01.109 "name": "BaseBdev3", 00:22:01.109 "uuid": "2d053048-f154-5e6b-97fa-06570346d590", 00:22:01.109 "is_configured": true, 00:22:01.109 "data_offset": 2048, 00:22:01.109 "data_size": 63488 00:22:01.109 }, 00:22:01.109 { 00:22:01.109 "name": "BaseBdev4", 00:22:01.109 "uuid": "915579e3-fcdf-5da0-9655-996d71c574c1", 00:22:01.109 "is_configured": true, 00:22:01.109 "data_offset": 2048, 00:22:01.109 "data_size": 63488 00:22:01.109 } 00:22:01.109 ] 00:22:01.109 }' 00:22:01.110 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:01.110 23:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.380 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:01.380 23:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.380 23:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.380 [2024-12-09 23:03:36.569949] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:01.380 [2024-12-09 23:03:36.570000] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:01.380 [2024-12-09 23:03:36.573346] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:01.380 [2024-12-09 23:03:36.573428] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:01.380 [2024-12-09 23:03:36.573483] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:01.380 [2024-12-09 23:03:36.573496] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:22:01.380 { 00:22:01.380 "results": [ 00:22:01.380 { 00:22:01.380 "job": "raid_bdev1", 00:22:01.380 "core_mask": "0x1", 00:22:01.380 "workload": "randrw", 00:22:01.380 "percentage": 50, 00:22:01.380 "status": "finished", 00:22:01.380 "queue_depth": 1, 00:22:01.380 "io_size": 131072, 00:22:01.380 "runtime": 1.273955, 00:22:01.380 "iops": 11345.76967004329, 00:22:01.380 "mibps": 1418.2212087554112, 00:22:01.380 "io_failed": 1, 00:22:01.380 "io_timeout": 0, 00:22:01.380 "avg_latency_us": 122.31751930394061, 00:22:01.380 "min_latency_us": 34.26461538461538, 00:22:01.380 "max_latency_us": 1751.8276923076924 00:22:01.380 } 00:22:01.380 ], 00:22:01.380 "core_count": 1 00:22:01.380 } 00:22:01.380 23:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.380 23:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69213 00:22:01.380 23:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69213 ']' 00:22:01.380 23:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69213 00:22:01.380 23:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:22:01.380 23:03:36 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:01.380 23:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69213 00:22:01.380 killing process with pid 69213 00:22:01.380 23:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:01.380 23:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:01.380 23:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69213' 00:22:01.380 23:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69213 00:22:01.380 23:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69213 00:22:01.380 [2024-12-09 23:03:36.603828] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:01.643 [2024-12-09 23:03:36.840858] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:02.586 23:03:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.TgYExA5y9p 00:22:02.586 23:03:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:22:02.586 23:03:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:22:02.586 23:03:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.78 00:22:02.586 23:03:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:22:02.586 23:03:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:02.586 23:03:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:22:02.586 23:03:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.78 != \0\.\0\0 ]] 00:22:02.586 00:22:02.586 real 0m4.091s 00:22:02.586 user 0m4.710s 00:22:02.586 sys 0m0.581s 00:22:02.586 ************************************ 00:22:02.586 END TEST raid_read_error_test 
00:22:02.586 ************************************ 00:22:02.586 23:03:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:02.586 23:03:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.586 23:03:37 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:22:02.586 23:03:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:02.586 23:03:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:02.586 23:03:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:02.586 ************************************ 00:22:02.586 START TEST raid_write_error_test 00:22:02.586 ************************************ 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.y6UquF9Xby 00:22:02.586 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69348 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69348 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69348 ']' 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.586 23:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:02.586 [2024-12-09 23:03:37.898516] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:22:02.586 [2024-12-09 23:03:37.898671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69348 ] 00:22:02.846 [2024-12-09 23:03:38.061083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.847 [2024-12-09 23:03:38.207368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.165 [2024-12-09 23:03:38.378965] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:03.165 [2024-12-09 23:03:38.379041] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.739 BaseBdev1_malloc 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.739 true 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.739 [2024-12-09 23:03:38.842715] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:03.739 [2024-12-09 23:03:38.842804] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:03.739 [2024-12-09 23:03:38.842831] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:22:03.739 [2024-12-09 23:03:38.842844] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:03.739 [2024-12-09 23:03:38.845789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:03.739 [2024-12-09 23:03:38.846053] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:03.739 BaseBdev1 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.739 BaseBdev2_malloc 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:22:03.739 23:03:38 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.739 true 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.739 [2024-12-09 23:03:38.894390] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:03.739 [2024-12-09 23:03:38.894470] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:03.739 [2024-12-09 23:03:38.894492] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:22:03.739 [2024-12-09 23:03:38.894503] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:03.739 [2024-12-09 23:03:38.897285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:03.739 [2024-12-09 23:03:38.897352] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:03.739 BaseBdev2 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:22:03.739 BaseBdev3_malloc 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.739 true 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.739 [2024-12-09 23:03:38.970088] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:03.739 [2024-12-09 23:03:38.970198] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:03.739 [2024-12-09 23:03:38.970232] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:22:03.739 [2024-12-09 23:03:38.970250] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:03.739 [2024-12-09 23:03:38.972926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:03.739 [2024-12-09 23:03:38.972978] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:03.739 BaseBdev3 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.739 23:03:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.739 BaseBdev4_malloc 00:22:03.739 23:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.739 23:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:22:03.739 23:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.739 23:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.739 true 00:22:03.739 23:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.739 23:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:22:03.739 23:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.739 23:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.739 [2024-12-09 23:03:39.024560] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:22:03.739 [2024-12-09 23:03:39.024849] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:03.739 [2024-12-09 23:03:39.024894] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:03.739 [2024-12-09 23:03:39.024910] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:03.739 [2024-12-09 23:03:39.027552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:03.739 [2024-12-09 23:03:39.027612] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:03.739 BaseBdev4 
00:22:03.739 23:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.739 23:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:22:03.739 23:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.739 23:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.739 [2024-12-09 23:03:39.036657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:03.739 [2024-12-09 23:03:39.038912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:03.739 [2024-12-09 23:03:39.039190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:03.739 [2024-12-09 23:03:39.039281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:03.739 [2024-12-09 23:03:39.039547] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:22:03.739 [2024-12-09 23:03:39.039568] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:03.739 [2024-12-09 23:03:39.039899] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:22:03.739 [2024-12-09 23:03:39.040094] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:22:03.739 [2024-12-09 23:03:39.040131] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:22:03.739 [2024-12-09 23:03:39.040336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:03.739 23:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.739 23:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:22:03.739 23:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:03.739 23:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:03.739 23:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:03.739 23:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:03.739 23:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:03.739 23:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:03.739 23:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:03.739 23:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:03.739 23:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:03.739 23:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.739 23:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.739 23:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.739 23:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.739 23:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.739 23:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:03.739 "name": "raid_bdev1", 00:22:03.739 "uuid": "d4b101ff-0a85-4e1c-8854-025e853e477b", 00:22:03.739 "strip_size_kb": 64, 00:22:03.739 "state": "online", 00:22:03.739 "raid_level": "raid0", 00:22:03.739 "superblock": true, 00:22:03.739 "num_base_bdevs": 4, 00:22:03.740 "num_base_bdevs_discovered": 4, 00:22:03.740 
"num_base_bdevs_operational": 4, 00:22:03.740 "base_bdevs_list": [ 00:22:03.740 { 00:22:03.740 "name": "BaseBdev1", 00:22:03.740 "uuid": "88cf67d4-f91e-5b4c-ad76-c51ea91a6927", 00:22:03.740 "is_configured": true, 00:22:03.740 "data_offset": 2048, 00:22:03.740 "data_size": 63488 00:22:03.740 }, 00:22:03.740 { 00:22:03.740 "name": "BaseBdev2", 00:22:03.740 "uuid": "7a0ef559-79ef-5e1c-ac74-f65f3c7c3ce8", 00:22:03.740 "is_configured": true, 00:22:03.740 "data_offset": 2048, 00:22:03.740 "data_size": 63488 00:22:03.740 }, 00:22:03.740 { 00:22:03.740 "name": "BaseBdev3", 00:22:03.740 "uuid": "c65d009f-86ad-5eb5-9e07-d6e4c01d90dc", 00:22:03.740 "is_configured": true, 00:22:03.740 "data_offset": 2048, 00:22:03.740 "data_size": 63488 00:22:03.740 }, 00:22:03.740 { 00:22:03.740 "name": "BaseBdev4", 00:22:03.740 "uuid": "af15de89-8fbd-5def-976d-cae02c57daeb", 00:22:03.740 "is_configured": true, 00:22:03.740 "data_offset": 2048, 00:22:03.740 "data_size": 63488 00:22:03.740 } 00:22:03.740 ] 00:22:03.740 }' 00:22:03.740 23:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:03.740 23:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:04.311 23:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:22:04.311 23:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:22:04.311 [2024-12-09 23:03:39.477874] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:22:05.254 23:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:22:05.254 23:03:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.255 23:03:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.255 23:03:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.255 23:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:22:05.255 23:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:22:05.255 23:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:22:05.255 23:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:22:05.255 23:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:05.255 23:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:05.255 23:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:05.255 23:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:05.255 23:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:05.255 23:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:05.255 23:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:05.255 23:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:05.255 23:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:05.255 23:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:05.255 23:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.255 23:03:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.255 23:03:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.255 23:03:40 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.255 23:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:05.255 "name": "raid_bdev1", 00:22:05.255 "uuid": "d4b101ff-0a85-4e1c-8854-025e853e477b", 00:22:05.255 "strip_size_kb": 64, 00:22:05.255 "state": "online", 00:22:05.255 "raid_level": "raid0", 00:22:05.255 "superblock": true, 00:22:05.255 "num_base_bdevs": 4, 00:22:05.255 "num_base_bdevs_discovered": 4, 00:22:05.255 "num_base_bdevs_operational": 4, 00:22:05.255 "base_bdevs_list": [ 00:22:05.255 { 00:22:05.255 "name": "BaseBdev1", 00:22:05.255 "uuid": "88cf67d4-f91e-5b4c-ad76-c51ea91a6927", 00:22:05.255 "is_configured": true, 00:22:05.255 "data_offset": 2048, 00:22:05.255 "data_size": 63488 00:22:05.255 }, 00:22:05.255 { 00:22:05.255 "name": "BaseBdev2", 00:22:05.255 "uuid": "7a0ef559-79ef-5e1c-ac74-f65f3c7c3ce8", 00:22:05.255 "is_configured": true, 00:22:05.255 "data_offset": 2048, 00:22:05.255 "data_size": 63488 00:22:05.255 }, 00:22:05.255 { 00:22:05.255 "name": "BaseBdev3", 00:22:05.255 "uuid": "c65d009f-86ad-5eb5-9e07-d6e4c01d90dc", 00:22:05.255 "is_configured": true, 00:22:05.255 "data_offset": 2048, 00:22:05.255 "data_size": 63488 00:22:05.255 }, 00:22:05.255 { 00:22:05.255 "name": "BaseBdev4", 00:22:05.255 "uuid": "af15de89-8fbd-5def-976d-cae02c57daeb", 00:22:05.255 "is_configured": true, 00:22:05.255 "data_offset": 2048, 00:22:05.255 "data_size": 63488 00:22:05.255 } 00:22:05.255 ] 00:22:05.255 }' 00:22:05.255 23:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:05.255 23:03:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.515 23:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:05.515 23:03:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.515 23:03:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:22:05.515 [2024-12-09 23:03:40.733606] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:05.515 [2024-12-09 23:03:40.733650] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:05.515 [2024-12-09 23:03:40.736955] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:05.515 [2024-12-09 23:03:40.737033] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:05.515 [2024-12-09 23:03:40.737085] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:05.515 [2024-12-09 23:03:40.737111] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:22:05.515 { 00:22:05.515 "results": [ 00:22:05.515 { 00:22:05.515 "job": "raid_bdev1", 00:22:05.515 "core_mask": "0x1", 00:22:05.515 "workload": "randrw", 00:22:05.515 "percentage": 50, 00:22:05.515 "status": "finished", 00:22:05.515 "queue_depth": 1, 00:22:05.515 "io_size": 131072, 00:22:05.515 "runtime": 1.253599, 00:22:05.515 "iops": 11828.343832437646, 00:22:05.515 "mibps": 1478.5429790547057, 00:22:05.515 "io_failed": 1, 00:22:05.515 "io_timeout": 0, 00:22:05.515 "avg_latency_us": 117.2523211793938, 00:22:05.515 "min_latency_us": 34.855384615384615, 00:22:05.515 "max_latency_us": 1852.6523076923077 00:22:05.515 } 00:22:05.515 ], 00:22:05.515 "core_count": 1 00:22:05.515 } 00:22:05.515 23:03:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.515 23:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69348 00:22:05.515 23:03:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69348 ']' 00:22:05.515 23:03:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69348 00:22:05.515 23:03:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # 
uname 00:22:05.515 23:03:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:05.515 23:03:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69348 00:22:05.515 killing process with pid 69348 00:22:05.515 23:03:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:05.515 23:03:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:05.515 23:03:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69348' 00:22:05.515 23:03:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69348 00:22:05.515 23:03:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69348 00:22:05.515 [2024-12-09 23:03:40.768383] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:05.776 [2024-12-09 23:03:41.003478] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:06.719 23:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.y6UquF9Xby 00:22:06.719 23:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:22:06.719 23:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:22:06.719 23:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.80 00:22:06.719 23:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:22:06.719 23:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:06.719 23:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:22:06.719 23:03:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.80 != \0\.\0\0 ]] 00:22:06.719 00:22:06.719 real 0m4.094s 00:22:06.719 user 0m4.697s 00:22:06.719 sys 0m0.593s 00:22:06.719 
23:03:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:06.719 ************************************ 00:22:06.719 END TEST raid_write_error_test 00:22:06.719 ************************************ 00:22:06.719 23:03:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:06.719 23:03:41 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:22:06.719 23:03:41 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:22:06.719 23:03:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:06.719 23:03:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:06.719 23:03:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:06.719 ************************************ 00:22:06.719 START TEST raid_state_function_test 00:22:06.719 ************************************ 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:06.719 23:03:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:22:06.719 Process raid pid: 69486 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:22:06.719 
23:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69486 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69486' 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69486 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69486 ']' 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:06.719 23:03:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:06.719 [2024-12-09 23:03:42.060077] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:22:06.719 [2024-12-09 23:03:42.060521] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:06.981 [2024-12-09 23:03:42.223582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.246 [2024-12-09 23:03:42.370989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.246 [2024-12-09 23:03:42.543799] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:07.246 [2024-12-09 23:03:42.544110] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:07.818 23:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:07.818 23:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:22:07.818 23:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:07.818 23:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.818 23:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:07.818 [2024-12-09 23:03:42.958389] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:07.818 [2024-12-09 23:03:42.958473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:07.818 [2024-12-09 23:03:42.958486] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:07.818 [2024-12-09 23:03:42.958497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:07.818 [2024-12-09 23:03:42.958503] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:22:07.818 [2024-12-09 23:03:42.958513] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:07.818 [2024-12-09 23:03:42.958520] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:07.818 [2024-12-09 23:03:42.958529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:07.818 23:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.818 23:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:07.818 23:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:07.818 23:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:07.818 23:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:07.818 23:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:07.818 23:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:07.818 23:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:07.818 23:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:07.818 23:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:07.818 23:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:07.818 23:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:07.818 23:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:07.818 23:03:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.818 23:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:07.818 23:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.818 23:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:07.818 "name": "Existed_Raid", 00:22:07.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.818 "strip_size_kb": 64, 00:22:07.818 "state": "configuring", 00:22:07.818 "raid_level": "concat", 00:22:07.818 "superblock": false, 00:22:07.818 "num_base_bdevs": 4, 00:22:07.818 "num_base_bdevs_discovered": 0, 00:22:07.818 "num_base_bdevs_operational": 4, 00:22:07.818 "base_bdevs_list": [ 00:22:07.818 { 00:22:07.818 "name": "BaseBdev1", 00:22:07.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.818 "is_configured": false, 00:22:07.818 "data_offset": 0, 00:22:07.818 "data_size": 0 00:22:07.818 }, 00:22:07.818 { 00:22:07.818 "name": "BaseBdev2", 00:22:07.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.818 "is_configured": false, 00:22:07.818 "data_offset": 0, 00:22:07.818 "data_size": 0 00:22:07.818 }, 00:22:07.818 { 00:22:07.818 "name": "BaseBdev3", 00:22:07.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.818 "is_configured": false, 00:22:07.818 "data_offset": 0, 00:22:07.818 "data_size": 0 00:22:07.818 }, 00:22:07.818 { 00:22:07.818 "name": "BaseBdev4", 00:22:07.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.819 "is_configured": false, 00:22:07.819 "data_offset": 0, 00:22:07.819 "data_size": 0 00:22:07.819 } 00:22:07.819 ] 00:22:07.819 }' 00:22:07.819 23:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:07.819 23:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:08.078 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:22:08.078 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:08.079 [2024-12-09 23:03:43.310411] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:08.079 [2024-12-09 23:03:43.310468] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:08.079 [2024-12-09 23:03:43.318435] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:08.079 [2024-12-09 23:03:43.318498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:08.079 [2024-12-09 23:03:43.318508] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:08.079 [2024-12-09 23:03:43.318520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:08.079 [2024-12-09 23:03:43.318528] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:08.079 [2024-12-09 23:03:43.318538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:08.079 [2024-12-09 23:03:43.318545] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:08.079 [2024-12-09 23:03:43.318556] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:08.079 [2024-12-09 23:03:43.357296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:08.079 BaseBdev1 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:08.079 [ 00:22:08.079 { 00:22:08.079 "name": "BaseBdev1", 00:22:08.079 "aliases": [ 00:22:08.079 "5a7d4c07-9d09-4576-87d1-a867c526b811" 00:22:08.079 ], 00:22:08.079 "product_name": "Malloc disk", 00:22:08.079 "block_size": 512, 00:22:08.079 "num_blocks": 65536, 00:22:08.079 "uuid": "5a7d4c07-9d09-4576-87d1-a867c526b811", 00:22:08.079 "assigned_rate_limits": { 00:22:08.079 "rw_ios_per_sec": 0, 00:22:08.079 "rw_mbytes_per_sec": 0, 00:22:08.079 "r_mbytes_per_sec": 0, 00:22:08.079 "w_mbytes_per_sec": 0 00:22:08.079 }, 00:22:08.079 "claimed": true, 00:22:08.079 "claim_type": "exclusive_write", 00:22:08.079 "zoned": false, 00:22:08.079 "supported_io_types": { 00:22:08.079 "read": true, 00:22:08.079 "write": true, 00:22:08.079 "unmap": true, 00:22:08.079 "flush": true, 00:22:08.079 "reset": true, 00:22:08.079 "nvme_admin": false, 00:22:08.079 "nvme_io": false, 00:22:08.079 "nvme_io_md": false, 00:22:08.079 "write_zeroes": true, 00:22:08.079 "zcopy": true, 00:22:08.079 "get_zone_info": false, 00:22:08.079 "zone_management": false, 00:22:08.079 "zone_append": false, 00:22:08.079 "compare": false, 00:22:08.079 "compare_and_write": false, 00:22:08.079 "abort": true, 00:22:08.079 "seek_hole": false, 00:22:08.079 "seek_data": false, 00:22:08.079 "copy": true, 00:22:08.079 "nvme_iov_md": false 00:22:08.079 }, 00:22:08.079 "memory_domains": [ 00:22:08.079 { 00:22:08.079 "dma_device_id": "system", 00:22:08.079 "dma_device_type": 1 00:22:08.079 }, 00:22:08.079 { 00:22:08.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.079 "dma_device_type": 2 00:22:08.079 } 00:22:08.079 ], 00:22:08.079 "driver_specific": {} 00:22:08.079 } 00:22:08.079 ] 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.079 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:08.079 "name": "Existed_Raid", 
00:22:08.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.079 "strip_size_kb": 64, 00:22:08.079 "state": "configuring", 00:22:08.079 "raid_level": "concat", 00:22:08.079 "superblock": false, 00:22:08.079 "num_base_bdevs": 4, 00:22:08.079 "num_base_bdevs_discovered": 1, 00:22:08.079 "num_base_bdevs_operational": 4, 00:22:08.079 "base_bdevs_list": [ 00:22:08.079 { 00:22:08.079 "name": "BaseBdev1", 00:22:08.079 "uuid": "5a7d4c07-9d09-4576-87d1-a867c526b811", 00:22:08.079 "is_configured": true, 00:22:08.079 "data_offset": 0, 00:22:08.079 "data_size": 65536 00:22:08.079 }, 00:22:08.079 { 00:22:08.079 "name": "BaseBdev2", 00:22:08.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.080 "is_configured": false, 00:22:08.080 "data_offset": 0, 00:22:08.080 "data_size": 0 00:22:08.080 }, 00:22:08.080 { 00:22:08.080 "name": "BaseBdev3", 00:22:08.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.080 "is_configured": false, 00:22:08.080 "data_offset": 0, 00:22:08.080 "data_size": 0 00:22:08.080 }, 00:22:08.080 { 00:22:08.080 "name": "BaseBdev4", 00:22:08.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.080 "is_configured": false, 00:22:08.080 "data_offset": 0, 00:22:08.080 "data_size": 0 00:22:08.080 } 00:22:08.080 ] 00:22:08.080 }' 00:22:08.080 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:08.080 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:08.652 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:08.652 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.652 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:08.652 [2024-12-09 23:03:43.709505] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:08.652 [2024-12-09 23:03:43.709599] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:08.652 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.652 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:08.652 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.652 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:08.652 [2024-12-09 23:03:43.717525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:08.652 [2024-12-09 23:03:43.719823] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:08.652 [2024-12-09 23:03:43.719884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:08.652 [2024-12-09 23:03:43.719895] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:08.652 [2024-12-09 23:03:43.719907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:08.652 [2024-12-09 23:03:43.719914] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:08.652 [2024-12-09 23:03:43.719923] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:08.652 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.652 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:08.652 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:08.652 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:22:08.652 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:08.652 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:08.652 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:08.652 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:08.652 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:08.652 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:08.652 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:08.652 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:08.652 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:08.652 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.652 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.652 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:08.652 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:08.652 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.652 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:08.652 "name": "Existed_Raid", 00:22:08.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.652 "strip_size_kb": 64, 00:22:08.652 "state": "configuring", 00:22:08.652 "raid_level": "concat", 00:22:08.652 "superblock": false, 00:22:08.652 "num_base_bdevs": 4, 00:22:08.652 
"num_base_bdevs_discovered": 1, 00:22:08.652 "num_base_bdevs_operational": 4, 00:22:08.652 "base_bdevs_list": [ 00:22:08.652 { 00:22:08.652 "name": "BaseBdev1", 00:22:08.652 "uuid": "5a7d4c07-9d09-4576-87d1-a867c526b811", 00:22:08.652 "is_configured": true, 00:22:08.652 "data_offset": 0, 00:22:08.653 "data_size": 65536 00:22:08.653 }, 00:22:08.653 { 00:22:08.653 "name": "BaseBdev2", 00:22:08.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.653 "is_configured": false, 00:22:08.653 "data_offset": 0, 00:22:08.653 "data_size": 0 00:22:08.653 }, 00:22:08.653 { 00:22:08.653 "name": "BaseBdev3", 00:22:08.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.653 "is_configured": false, 00:22:08.653 "data_offset": 0, 00:22:08.653 "data_size": 0 00:22:08.653 }, 00:22:08.653 { 00:22:08.653 "name": "BaseBdev4", 00:22:08.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.653 "is_configured": false, 00:22:08.653 "data_offset": 0, 00:22:08.653 "data_size": 0 00:22:08.653 } 00:22:08.653 ] 00:22:08.653 }' 00:22:08.653 23:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:08.653 23:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:08.913 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:08.913 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.913 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:08.913 [2024-12-09 23:03:44.090057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:08.913 BaseBdev2 00:22:08.913 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.913 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:08.913 23:03:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:08.913 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:08.913 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:08.913 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:08.913 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:08.913 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:08.913 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.913 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:08.913 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.913 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:08.913 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.913 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:08.913 [ 00:22:08.913 { 00:22:08.913 "name": "BaseBdev2", 00:22:08.913 "aliases": [ 00:22:08.913 "c9d4e533-9a68-408f-b022-4c04b2513bb8" 00:22:08.913 ], 00:22:08.913 "product_name": "Malloc disk", 00:22:08.913 "block_size": 512, 00:22:08.913 "num_blocks": 65536, 00:22:08.913 "uuid": "c9d4e533-9a68-408f-b022-4c04b2513bb8", 00:22:08.913 "assigned_rate_limits": { 00:22:08.913 "rw_ios_per_sec": 0, 00:22:08.913 "rw_mbytes_per_sec": 0, 00:22:08.913 "r_mbytes_per_sec": 0, 00:22:08.913 "w_mbytes_per_sec": 0 00:22:08.913 }, 00:22:08.913 "claimed": true, 00:22:08.913 "claim_type": "exclusive_write", 00:22:08.913 "zoned": false, 00:22:08.913 "supported_io_types": { 
00:22:08.913 "read": true, 00:22:08.913 "write": true, 00:22:08.913 "unmap": true, 00:22:08.914 "flush": true, 00:22:08.914 "reset": true, 00:22:08.914 "nvme_admin": false, 00:22:08.914 "nvme_io": false, 00:22:08.914 "nvme_io_md": false, 00:22:08.914 "write_zeroes": true, 00:22:08.914 "zcopy": true, 00:22:08.914 "get_zone_info": false, 00:22:08.914 "zone_management": false, 00:22:08.914 "zone_append": false, 00:22:08.914 "compare": false, 00:22:08.914 "compare_and_write": false, 00:22:08.914 "abort": true, 00:22:08.914 "seek_hole": false, 00:22:08.914 "seek_data": false, 00:22:08.914 "copy": true, 00:22:08.914 "nvme_iov_md": false 00:22:08.914 }, 00:22:08.914 "memory_domains": [ 00:22:08.914 { 00:22:08.914 "dma_device_id": "system", 00:22:08.914 "dma_device_type": 1 00:22:08.914 }, 00:22:08.914 { 00:22:08.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.914 "dma_device_type": 2 00:22:08.914 } 00:22:08.914 ], 00:22:08.914 "driver_specific": {} 00:22:08.914 } 00:22:08.914 ] 00:22:08.914 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.914 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:08.914 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:08.914 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:08.914 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:08.914 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:08.914 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:08.914 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:08.914 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:22:08.914 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:08.914 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:08.914 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:08.914 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:08.914 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:08.914 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.914 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:08.914 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.914 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:08.914 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.914 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:08.914 "name": "Existed_Raid", 00:22:08.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.914 "strip_size_kb": 64, 00:22:08.914 "state": "configuring", 00:22:08.914 "raid_level": "concat", 00:22:08.914 "superblock": false, 00:22:08.914 "num_base_bdevs": 4, 00:22:08.914 "num_base_bdevs_discovered": 2, 00:22:08.914 "num_base_bdevs_operational": 4, 00:22:08.914 "base_bdevs_list": [ 00:22:08.914 { 00:22:08.914 "name": "BaseBdev1", 00:22:08.914 "uuid": "5a7d4c07-9d09-4576-87d1-a867c526b811", 00:22:08.914 "is_configured": true, 00:22:08.914 "data_offset": 0, 00:22:08.914 "data_size": 65536 00:22:08.914 }, 00:22:08.914 { 00:22:08.914 "name": "BaseBdev2", 00:22:08.914 "uuid": "c9d4e533-9a68-408f-b022-4c04b2513bb8", 00:22:08.914 
"is_configured": true, 00:22:08.914 "data_offset": 0, 00:22:08.914 "data_size": 65536 00:22:08.914 }, 00:22:08.914 { 00:22:08.914 "name": "BaseBdev3", 00:22:08.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.914 "is_configured": false, 00:22:08.914 "data_offset": 0, 00:22:08.914 "data_size": 0 00:22:08.914 }, 00:22:08.914 { 00:22:08.914 "name": "BaseBdev4", 00:22:08.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.914 "is_configured": false, 00:22:08.914 "data_offset": 0, 00:22:08.914 "data_size": 0 00:22:08.914 } 00:22:08.914 ] 00:22:08.914 }' 00:22:08.914 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:08.914 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:09.173 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:09.173 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.173 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:09.173 [2024-12-09 23:03:44.500950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:09.173 BaseBdev3 00:22:09.173 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.173 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:22:09.173 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:22:09.173 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:09.173 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:09.173 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:09.173 23:03:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:09.173 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:09.173 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.173 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:09.173 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.173 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:09.173 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.174 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:09.174 [ 00:22:09.174 { 00:22:09.174 "name": "BaseBdev3", 00:22:09.174 "aliases": [ 00:22:09.174 "f4a539a6-c686-4902-869e-cf00272bb755" 00:22:09.174 ], 00:22:09.174 "product_name": "Malloc disk", 00:22:09.174 "block_size": 512, 00:22:09.174 "num_blocks": 65536, 00:22:09.174 "uuid": "f4a539a6-c686-4902-869e-cf00272bb755", 00:22:09.174 "assigned_rate_limits": { 00:22:09.174 "rw_ios_per_sec": 0, 00:22:09.174 "rw_mbytes_per_sec": 0, 00:22:09.174 "r_mbytes_per_sec": 0, 00:22:09.174 "w_mbytes_per_sec": 0 00:22:09.174 }, 00:22:09.174 "claimed": true, 00:22:09.174 "claim_type": "exclusive_write", 00:22:09.174 "zoned": false, 00:22:09.174 "supported_io_types": { 00:22:09.174 "read": true, 00:22:09.174 "write": true, 00:22:09.174 "unmap": true, 00:22:09.174 "flush": true, 00:22:09.174 "reset": true, 00:22:09.174 "nvme_admin": false, 00:22:09.174 "nvme_io": false, 00:22:09.174 "nvme_io_md": false, 00:22:09.174 "write_zeroes": true, 00:22:09.174 "zcopy": true, 00:22:09.174 "get_zone_info": false, 00:22:09.174 "zone_management": false, 00:22:09.174 "zone_append": false, 00:22:09.174 "compare": false, 00:22:09.174 "compare_and_write": false, 
00:22:09.174 "abort": true, 00:22:09.174 "seek_hole": false, 00:22:09.174 "seek_data": false, 00:22:09.174 "copy": true, 00:22:09.174 "nvme_iov_md": false 00:22:09.174 }, 00:22:09.174 "memory_domains": [ 00:22:09.174 { 00:22:09.174 "dma_device_id": "system", 00:22:09.174 "dma_device_type": 1 00:22:09.174 }, 00:22:09.174 { 00:22:09.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.174 "dma_device_type": 2 00:22:09.174 } 00:22:09.174 ], 00:22:09.174 "driver_specific": {} 00:22:09.174 } 00:22:09.174 ] 00:22:09.174 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.174 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:09.174 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:09.174 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:09.174 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:09.174 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:09.174 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:09.174 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:09.174 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:09.174 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:09.174 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:09.174 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:09.174 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:22:09.174 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:09.435 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.435 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.435 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:09.435 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:09.435 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.435 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:09.435 "name": "Existed_Raid", 00:22:09.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.435 "strip_size_kb": 64, 00:22:09.435 "state": "configuring", 00:22:09.435 "raid_level": "concat", 00:22:09.435 "superblock": false, 00:22:09.435 "num_base_bdevs": 4, 00:22:09.435 "num_base_bdevs_discovered": 3, 00:22:09.435 "num_base_bdevs_operational": 4, 00:22:09.435 "base_bdevs_list": [ 00:22:09.435 { 00:22:09.435 "name": "BaseBdev1", 00:22:09.435 "uuid": "5a7d4c07-9d09-4576-87d1-a867c526b811", 00:22:09.435 "is_configured": true, 00:22:09.435 "data_offset": 0, 00:22:09.435 "data_size": 65536 00:22:09.435 }, 00:22:09.435 { 00:22:09.435 "name": "BaseBdev2", 00:22:09.435 "uuid": "c9d4e533-9a68-408f-b022-4c04b2513bb8", 00:22:09.435 "is_configured": true, 00:22:09.435 "data_offset": 0, 00:22:09.435 "data_size": 65536 00:22:09.435 }, 00:22:09.435 { 00:22:09.435 "name": "BaseBdev3", 00:22:09.435 "uuid": "f4a539a6-c686-4902-869e-cf00272bb755", 00:22:09.435 "is_configured": true, 00:22:09.435 "data_offset": 0, 00:22:09.435 "data_size": 65536 00:22:09.435 }, 00:22:09.435 { 00:22:09.435 "name": "BaseBdev4", 00:22:09.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.435 "is_configured": false, 
00:22:09.435 "data_offset": 0, 00:22:09.435 "data_size": 0 00:22:09.435 } 00:22:09.435 ] 00:22:09.435 }' 00:22:09.435 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:09.435 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:09.697 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:22:09.697 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.697 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:09.697 [2024-12-09 23:03:44.889257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:09.697 [2024-12-09 23:03:44.889328] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:09.697 [2024-12-09 23:03:44.889337] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:22:09.697 [2024-12-09 23:03:44.889649] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:09.697 [2024-12-09 23:03:44.889817] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:09.697 [2024-12-09 23:03:44.889837] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:09.697 [2024-12-09 23:03:44.890190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:09.697 BaseBdev4 00:22:09.697 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.697 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:22:09.697 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:22:09.697 23:03:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:09.697 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:09.697 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:09.697 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:09.697 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:09.697 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.697 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:09.697 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.697 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:09.697 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.697 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:09.697 [ 00:22:09.697 { 00:22:09.697 "name": "BaseBdev4", 00:22:09.697 "aliases": [ 00:22:09.697 "f86fdaf7-5caf-42df-a1db-00d75d55892f" 00:22:09.697 ], 00:22:09.697 "product_name": "Malloc disk", 00:22:09.697 "block_size": 512, 00:22:09.697 "num_blocks": 65536, 00:22:09.697 "uuid": "f86fdaf7-5caf-42df-a1db-00d75d55892f", 00:22:09.697 "assigned_rate_limits": { 00:22:09.697 "rw_ios_per_sec": 0, 00:22:09.697 "rw_mbytes_per_sec": 0, 00:22:09.697 "r_mbytes_per_sec": 0, 00:22:09.697 "w_mbytes_per_sec": 0 00:22:09.697 }, 00:22:09.697 "claimed": true, 00:22:09.697 "claim_type": "exclusive_write", 00:22:09.697 "zoned": false, 00:22:09.697 "supported_io_types": { 00:22:09.697 "read": true, 00:22:09.697 "write": true, 00:22:09.697 "unmap": true, 00:22:09.697 "flush": true, 00:22:09.697 "reset": true, 00:22:09.697 
"nvme_admin": false, 00:22:09.697 "nvme_io": false, 00:22:09.697 "nvme_io_md": false, 00:22:09.697 "write_zeroes": true, 00:22:09.697 "zcopy": true, 00:22:09.697 "get_zone_info": false, 00:22:09.697 "zone_management": false, 00:22:09.697 "zone_append": false, 00:22:09.697 "compare": false, 00:22:09.697 "compare_and_write": false, 00:22:09.697 "abort": true, 00:22:09.697 "seek_hole": false, 00:22:09.697 "seek_data": false, 00:22:09.697 "copy": true, 00:22:09.697 "nvme_iov_md": false 00:22:09.697 }, 00:22:09.697 "memory_domains": [ 00:22:09.697 { 00:22:09.697 "dma_device_id": "system", 00:22:09.697 "dma_device_type": 1 00:22:09.697 }, 00:22:09.697 { 00:22:09.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.697 "dma_device_type": 2 00:22:09.697 } 00:22:09.697 ], 00:22:09.697 "driver_specific": {} 00:22:09.697 } 00:22:09.697 ] 00:22:09.697 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.697 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:09.697 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:09.697 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:09.697 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:22:09.697 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:09.697 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:09.697 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:09.697 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:09.697 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:09.697 
23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:09.697 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:09.698 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:09.698 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:09.698 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.698 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:09.698 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.698 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:09.698 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.698 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:09.698 "name": "Existed_Raid", 00:22:09.698 "uuid": "088da792-9a6b-4137-abf8-72a3c546c7cc", 00:22:09.698 "strip_size_kb": 64, 00:22:09.698 "state": "online", 00:22:09.698 "raid_level": "concat", 00:22:09.698 "superblock": false, 00:22:09.698 "num_base_bdevs": 4, 00:22:09.698 "num_base_bdevs_discovered": 4, 00:22:09.698 "num_base_bdevs_operational": 4, 00:22:09.698 "base_bdevs_list": [ 00:22:09.698 { 00:22:09.698 "name": "BaseBdev1", 00:22:09.698 "uuid": "5a7d4c07-9d09-4576-87d1-a867c526b811", 00:22:09.698 "is_configured": true, 00:22:09.698 "data_offset": 0, 00:22:09.698 "data_size": 65536 00:22:09.698 }, 00:22:09.698 { 00:22:09.698 "name": "BaseBdev2", 00:22:09.698 "uuid": "c9d4e533-9a68-408f-b022-4c04b2513bb8", 00:22:09.698 "is_configured": true, 00:22:09.698 "data_offset": 0, 00:22:09.698 "data_size": 65536 00:22:09.698 }, 00:22:09.698 { 00:22:09.698 "name": "BaseBdev3", 
00:22:09.698 "uuid": "f4a539a6-c686-4902-869e-cf00272bb755", 00:22:09.698 "is_configured": true, 00:22:09.698 "data_offset": 0, 00:22:09.698 "data_size": 65536 00:22:09.698 }, 00:22:09.698 { 00:22:09.698 "name": "BaseBdev4", 00:22:09.698 "uuid": "f86fdaf7-5caf-42df-a1db-00d75d55892f", 00:22:09.698 "is_configured": true, 00:22:09.698 "data_offset": 0, 00:22:09.698 "data_size": 65536 00:22:09.698 } 00:22:09.698 ] 00:22:09.698 }' 00:22:09.698 23:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:09.698 23:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:09.957 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:09.957 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:09.957 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:09.957 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:09.957 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:09.957 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:09.957 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:09.957 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:09.957 23:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.957 23:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:09.957 [2024-12-09 23:03:45.281830] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:09.957 23:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.957 
23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:09.957 "name": "Existed_Raid", 00:22:09.957 "aliases": [ 00:22:09.957 "088da792-9a6b-4137-abf8-72a3c546c7cc" 00:22:09.957 ], 00:22:09.957 "product_name": "Raid Volume", 00:22:09.957 "block_size": 512, 00:22:09.957 "num_blocks": 262144, 00:22:09.957 "uuid": "088da792-9a6b-4137-abf8-72a3c546c7cc", 00:22:09.957 "assigned_rate_limits": { 00:22:09.957 "rw_ios_per_sec": 0, 00:22:09.957 "rw_mbytes_per_sec": 0, 00:22:09.957 "r_mbytes_per_sec": 0, 00:22:09.957 "w_mbytes_per_sec": 0 00:22:09.957 }, 00:22:09.957 "claimed": false, 00:22:09.957 "zoned": false, 00:22:09.957 "supported_io_types": { 00:22:09.957 "read": true, 00:22:09.957 "write": true, 00:22:09.957 "unmap": true, 00:22:09.957 "flush": true, 00:22:09.957 "reset": true, 00:22:09.957 "nvme_admin": false, 00:22:09.957 "nvme_io": false, 00:22:09.957 "nvme_io_md": false, 00:22:09.957 "write_zeroes": true, 00:22:09.957 "zcopy": false, 00:22:09.957 "get_zone_info": false, 00:22:09.957 "zone_management": false, 00:22:09.957 "zone_append": false, 00:22:09.957 "compare": false, 00:22:09.957 "compare_and_write": false, 00:22:09.957 "abort": false, 00:22:09.957 "seek_hole": false, 00:22:09.957 "seek_data": false, 00:22:09.957 "copy": false, 00:22:09.957 "nvme_iov_md": false 00:22:09.957 }, 00:22:09.957 "memory_domains": [ 00:22:09.957 { 00:22:09.957 "dma_device_id": "system", 00:22:09.957 "dma_device_type": 1 00:22:09.957 }, 00:22:09.957 { 00:22:09.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.957 "dma_device_type": 2 00:22:09.957 }, 00:22:09.957 { 00:22:09.957 "dma_device_id": "system", 00:22:09.957 "dma_device_type": 1 00:22:09.957 }, 00:22:09.957 { 00:22:09.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.957 "dma_device_type": 2 00:22:09.957 }, 00:22:09.957 { 00:22:09.957 "dma_device_id": "system", 00:22:09.957 "dma_device_type": 1 00:22:09.957 }, 00:22:09.957 { 00:22:09.957 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:22:09.957 "dma_device_type": 2 00:22:09.957 }, 00:22:09.957 { 00:22:09.957 "dma_device_id": "system", 00:22:09.957 "dma_device_type": 1 00:22:09.957 }, 00:22:09.957 { 00:22:09.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.957 "dma_device_type": 2 00:22:09.957 } 00:22:09.957 ], 00:22:09.957 "driver_specific": { 00:22:09.957 "raid": { 00:22:09.957 "uuid": "088da792-9a6b-4137-abf8-72a3c546c7cc", 00:22:09.957 "strip_size_kb": 64, 00:22:09.957 "state": "online", 00:22:09.957 "raid_level": "concat", 00:22:09.957 "superblock": false, 00:22:09.957 "num_base_bdevs": 4, 00:22:09.957 "num_base_bdevs_discovered": 4, 00:22:09.957 "num_base_bdevs_operational": 4, 00:22:09.957 "base_bdevs_list": [ 00:22:09.957 { 00:22:09.957 "name": "BaseBdev1", 00:22:09.957 "uuid": "5a7d4c07-9d09-4576-87d1-a867c526b811", 00:22:09.957 "is_configured": true, 00:22:09.957 "data_offset": 0, 00:22:09.957 "data_size": 65536 00:22:09.957 }, 00:22:09.957 { 00:22:09.957 "name": "BaseBdev2", 00:22:09.957 "uuid": "c9d4e533-9a68-408f-b022-4c04b2513bb8", 00:22:09.957 "is_configured": true, 00:22:09.957 "data_offset": 0, 00:22:09.957 "data_size": 65536 00:22:09.957 }, 00:22:09.957 { 00:22:09.957 "name": "BaseBdev3", 00:22:09.957 "uuid": "f4a539a6-c686-4902-869e-cf00272bb755", 00:22:09.957 "is_configured": true, 00:22:09.957 "data_offset": 0, 00:22:09.957 "data_size": 65536 00:22:09.957 }, 00:22:09.957 { 00:22:09.957 "name": "BaseBdev4", 00:22:09.957 "uuid": "f86fdaf7-5caf-42df-a1db-00d75d55892f", 00:22:09.957 "is_configured": true, 00:22:09.957 "data_offset": 0, 00:22:09.957 "data_size": 65536 00:22:09.957 } 00:22:09.957 ] 00:22:09.957 } 00:22:09.957 } 00:22:09.957 }' 00:22:09.958 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:10.218 BaseBdev2 
00:22:10.218 BaseBdev3 00:22:10.218 BaseBdev4' 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.218 23:03:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:10.218 23:03:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.218 23:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:10.218 [2024-12-09 23:03:45.545583] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:10.218 [2024-12-09 23:03:45.545631] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:10.218 [2024-12-09 23:03:45.545692] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:10.480 23:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.480 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:10.480 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:22:10.480 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:10.480 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:22:10.480 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:22:10.480 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:22:10.480 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:10.480 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:22:10.480 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:10.480 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:22:10.480 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:10.480 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:10.480 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:10.480 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:10.480 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:10.480 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:10.480 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.480 23:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.480 23:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:10.480 23:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.480 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:10.480 "name": "Existed_Raid", 00:22:10.480 "uuid": "088da792-9a6b-4137-abf8-72a3c546c7cc", 00:22:10.480 "strip_size_kb": 64, 00:22:10.480 "state": "offline", 00:22:10.480 "raid_level": "concat", 00:22:10.480 "superblock": false, 00:22:10.480 "num_base_bdevs": 4, 00:22:10.480 "num_base_bdevs_discovered": 3, 00:22:10.480 "num_base_bdevs_operational": 3, 00:22:10.480 "base_bdevs_list": [ 00:22:10.480 { 00:22:10.480 "name": null, 00:22:10.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.480 "is_configured": false, 00:22:10.480 "data_offset": 0, 00:22:10.480 "data_size": 65536 00:22:10.480 }, 00:22:10.480 { 00:22:10.480 "name": "BaseBdev2", 00:22:10.480 "uuid": "c9d4e533-9a68-408f-b022-4c04b2513bb8", 00:22:10.480 "is_configured": 
true, 00:22:10.480 "data_offset": 0, 00:22:10.480 "data_size": 65536 00:22:10.480 }, 00:22:10.480 { 00:22:10.480 "name": "BaseBdev3", 00:22:10.480 "uuid": "f4a539a6-c686-4902-869e-cf00272bb755", 00:22:10.480 "is_configured": true, 00:22:10.480 "data_offset": 0, 00:22:10.480 "data_size": 65536 00:22:10.480 }, 00:22:10.480 { 00:22:10.480 "name": "BaseBdev4", 00:22:10.480 "uuid": "f86fdaf7-5caf-42df-a1db-00d75d55892f", 00:22:10.480 "is_configured": true, 00:22:10.480 "data_offset": 0, 00:22:10.480 "data_size": 65536 00:22:10.480 } 00:22:10.480 ] 00:22:10.480 }' 00:22:10.480 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:10.480 23:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:10.742 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:10.742 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:10.742 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.742 23:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:10.742 23:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.742 23:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:10.742 23:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.742 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:10.742 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:10.742 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:10.742 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:22:10.742 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:10.742 [2024-12-09 23:03:46.022439] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:10.742 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.742 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:10.742 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:10.742 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.742 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.742 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:10.742 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:11.003 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.003 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:11.003 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:11.003 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:22:11.003 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.003 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:11.003 [2024-12-09 23:03:46.131793] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:11.003 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.003 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:11.003 23:03:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:11.003 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.003 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:11.003 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.003 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:11.003 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.003 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:11.003 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:11.003 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:22:11.003 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.003 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:11.003 [2024-12-09 23:03:46.239717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:11.003 [2024-12-09 23:03:46.239788] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:11.003 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.003 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:11.003 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:11.003 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.003 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:22:11.003 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.003 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:11.003 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.003 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:11.003 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:11.003 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:22:11.003 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:22:11.003 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:11.003 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:11.003 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.003 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:11.264 BaseBdev2 00:22:11.264 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.264 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:22:11.264 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:11.264 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:11.264 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:11.264 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:11.264 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:22:11.264 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:11.264 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.264 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:11.264 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.264 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:11.264 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.264 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:11.264 [ 00:22:11.264 { 00:22:11.264 "name": "BaseBdev2", 00:22:11.264 "aliases": [ 00:22:11.264 "866c0b09-3cdc-42ba-b60c-8475e05bb722" 00:22:11.264 ], 00:22:11.264 "product_name": "Malloc disk", 00:22:11.264 "block_size": 512, 00:22:11.264 "num_blocks": 65536, 00:22:11.264 "uuid": "866c0b09-3cdc-42ba-b60c-8475e05bb722", 00:22:11.264 "assigned_rate_limits": { 00:22:11.264 "rw_ios_per_sec": 0, 00:22:11.264 "rw_mbytes_per_sec": 0, 00:22:11.264 "r_mbytes_per_sec": 0, 00:22:11.264 "w_mbytes_per_sec": 0 00:22:11.264 }, 00:22:11.264 "claimed": false, 00:22:11.264 "zoned": false, 00:22:11.264 "supported_io_types": { 00:22:11.264 "read": true, 00:22:11.264 "write": true, 00:22:11.264 "unmap": true, 00:22:11.264 "flush": true, 00:22:11.264 "reset": true, 00:22:11.264 "nvme_admin": false, 00:22:11.264 "nvme_io": false, 00:22:11.264 "nvme_io_md": false, 00:22:11.264 "write_zeroes": true, 00:22:11.264 "zcopy": true, 00:22:11.264 "get_zone_info": false, 00:22:11.264 "zone_management": false, 00:22:11.264 "zone_append": false, 00:22:11.264 "compare": false, 00:22:11.264 "compare_and_write": false, 00:22:11.264 "abort": true, 00:22:11.264 "seek_hole": false, 00:22:11.264 
"seek_data": false, 00:22:11.264 "copy": true, 00:22:11.264 "nvme_iov_md": false 00:22:11.264 }, 00:22:11.264 "memory_domains": [ 00:22:11.264 { 00:22:11.265 "dma_device_id": "system", 00:22:11.265 "dma_device_type": 1 00:22:11.265 }, 00:22:11.265 { 00:22:11.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:11.265 "dma_device_type": 2 00:22:11.265 } 00:22:11.265 ], 00:22:11.265 "driver_specific": {} 00:22:11.265 } 00:22:11.265 ] 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:11.265 BaseBdev3 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:11.265 [ 00:22:11.265 { 00:22:11.265 "name": "BaseBdev3", 00:22:11.265 "aliases": [ 00:22:11.265 "622935b8-eeea-45ba-ab58-184a0076d263" 00:22:11.265 ], 00:22:11.265 "product_name": "Malloc disk", 00:22:11.265 "block_size": 512, 00:22:11.265 "num_blocks": 65536, 00:22:11.265 "uuid": "622935b8-eeea-45ba-ab58-184a0076d263", 00:22:11.265 "assigned_rate_limits": { 00:22:11.265 "rw_ios_per_sec": 0, 00:22:11.265 "rw_mbytes_per_sec": 0, 00:22:11.265 "r_mbytes_per_sec": 0, 00:22:11.265 "w_mbytes_per_sec": 0 00:22:11.265 }, 00:22:11.265 "claimed": false, 00:22:11.265 "zoned": false, 00:22:11.265 "supported_io_types": { 00:22:11.265 "read": true, 00:22:11.265 "write": true, 00:22:11.265 "unmap": true, 00:22:11.265 "flush": true, 00:22:11.265 "reset": true, 00:22:11.265 "nvme_admin": false, 00:22:11.265 "nvme_io": false, 00:22:11.265 "nvme_io_md": false, 00:22:11.265 "write_zeroes": true, 00:22:11.265 "zcopy": true, 00:22:11.265 "get_zone_info": false, 00:22:11.265 "zone_management": false, 00:22:11.265 "zone_append": false, 00:22:11.265 "compare": false, 00:22:11.265 "compare_and_write": false, 00:22:11.265 "abort": true, 00:22:11.265 "seek_hole": false, 00:22:11.265 "seek_data": false, 
00:22:11.265 "copy": true, 00:22:11.265 "nvme_iov_md": false 00:22:11.265 }, 00:22:11.265 "memory_domains": [ 00:22:11.265 { 00:22:11.265 "dma_device_id": "system", 00:22:11.265 "dma_device_type": 1 00:22:11.265 }, 00:22:11.265 { 00:22:11.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:11.265 "dma_device_type": 2 00:22:11.265 } 00:22:11.265 ], 00:22:11.265 "driver_specific": {} 00:22:11.265 } 00:22:11.265 ] 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:11.265 BaseBdev4 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:11.265 
23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:11.265 [ 00:22:11.265 { 00:22:11.265 "name": "BaseBdev4", 00:22:11.265 "aliases": [ 00:22:11.265 "e6fa920c-8bc1-4540-94d2-9ae9c8864fb3" 00:22:11.265 ], 00:22:11.265 "product_name": "Malloc disk", 00:22:11.265 "block_size": 512, 00:22:11.265 "num_blocks": 65536, 00:22:11.265 "uuid": "e6fa920c-8bc1-4540-94d2-9ae9c8864fb3", 00:22:11.265 "assigned_rate_limits": { 00:22:11.265 "rw_ios_per_sec": 0, 00:22:11.265 "rw_mbytes_per_sec": 0, 00:22:11.265 "r_mbytes_per_sec": 0, 00:22:11.265 "w_mbytes_per_sec": 0 00:22:11.265 }, 00:22:11.265 "claimed": false, 00:22:11.265 "zoned": false, 00:22:11.265 "supported_io_types": { 00:22:11.265 "read": true, 00:22:11.265 "write": true, 00:22:11.265 "unmap": true, 00:22:11.265 "flush": true, 00:22:11.265 "reset": true, 00:22:11.265 "nvme_admin": false, 00:22:11.265 "nvme_io": false, 00:22:11.265 "nvme_io_md": false, 00:22:11.265 "write_zeroes": true, 00:22:11.265 "zcopy": true, 00:22:11.265 "get_zone_info": false, 00:22:11.265 "zone_management": false, 00:22:11.265 "zone_append": false, 00:22:11.265 "compare": false, 00:22:11.265 "compare_and_write": false, 00:22:11.265 "abort": true, 00:22:11.265 "seek_hole": false, 00:22:11.265 "seek_data": false, 00:22:11.265 
"copy": true, 00:22:11.265 "nvme_iov_md": false 00:22:11.265 }, 00:22:11.265 "memory_domains": [ 00:22:11.265 { 00:22:11.265 "dma_device_id": "system", 00:22:11.265 "dma_device_type": 1 00:22:11.265 }, 00:22:11.265 { 00:22:11.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:11.265 "dma_device_type": 2 00:22:11.265 } 00:22:11.265 ], 00:22:11.265 "driver_specific": {} 00:22:11.265 } 00:22:11.265 ] 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:11.265 [2024-12-09 23:03:46.534019] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:11.265 [2024-12-09 23:03:46.534269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:11.265 [2024-12-09 23:03:46.534377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:11.265 [2024-12-09 23:03:46.536706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:11.265 [2024-12-09 23:03:46.536912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:11.265 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.266 23:03:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:11.266 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:11.266 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:11.266 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:11.266 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:11.266 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:11.266 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:11.266 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:11.266 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:11.266 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:11.266 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:11.266 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.266 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.266 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:11.266 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.266 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:11.266 "name": "Existed_Raid", 00:22:11.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.266 "strip_size_kb": 64, 00:22:11.266 "state": "configuring", 00:22:11.266 
"raid_level": "concat", 00:22:11.266 "superblock": false, 00:22:11.266 "num_base_bdevs": 4, 00:22:11.266 "num_base_bdevs_discovered": 3, 00:22:11.266 "num_base_bdevs_operational": 4, 00:22:11.266 "base_bdevs_list": [ 00:22:11.266 { 00:22:11.266 "name": "BaseBdev1", 00:22:11.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.266 "is_configured": false, 00:22:11.266 "data_offset": 0, 00:22:11.266 "data_size": 0 00:22:11.266 }, 00:22:11.266 { 00:22:11.266 "name": "BaseBdev2", 00:22:11.266 "uuid": "866c0b09-3cdc-42ba-b60c-8475e05bb722", 00:22:11.266 "is_configured": true, 00:22:11.266 "data_offset": 0, 00:22:11.266 "data_size": 65536 00:22:11.266 }, 00:22:11.266 { 00:22:11.266 "name": "BaseBdev3", 00:22:11.266 "uuid": "622935b8-eeea-45ba-ab58-184a0076d263", 00:22:11.266 "is_configured": true, 00:22:11.266 "data_offset": 0, 00:22:11.266 "data_size": 65536 00:22:11.266 }, 00:22:11.266 { 00:22:11.266 "name": "BaseBdev4", 00:22:11.266 "uuid": "e6fa920c-8bc1-4540-94d2-9ae9c8864fb3", 00:22:11.266 "is_configured": true, 00:22:11.266 "data_offset": 0, 00:22:11.266 "data_size": 65536 00:22:11.266 } 00:22:11.266 ] 00:22:11.266 }' 00:22:11.266 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:11.266 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:11.837 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:22:11.837 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.837 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:11.837 [2024-12-09 23:03:46.902131] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:11.837 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.837 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:11.837 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:11.837 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:11.837 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:11.837 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:11.837 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:11.837 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:11.837 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:11.837 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:11.837 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:11.837 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.837 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:11.837 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.837 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:11.837 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.837 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:11.837 "name": "Existed_Raid", 00:22:11.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.837 "strip_size_kb": 64, 00:22:11.837 "state": "configuring", 00:22:11.837 "raid_level": "concat", 00:22:11.837 "superblock": false, 
00:22:11.837 "num_base_bdevs": 4, 00:22:11.837 "num_base_bdevs_discovered": 2, 00:22:11.837 "num_base_bdevs_operational": 4, 00:22:11.837 "base_bdevs_list": [ 00:22:11.837 { 00:22:11.837 "name": "BaseBdev1", 00:22:11.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.837 "is_configured": false, 00:22:11.837 "data_offset": 0, 00:22:11.837 "data_size": 0 00:22:11.837 }, 00:22:11.837 { 00:22:11.837 "name": null, 00:22:11.837 "uuid": "866c0b09-3cdc-42ba-b60c-8475e05bb722", 00:22:11.837 "is_configured": false, 00:22:11.837 "data_offset": 0, 00:22:11.837 "data_size": 65536 00:22:11.837 }, 00:22:11.837 { 00:22:11.837 "name": "BaseBdev3", 00:22:11.837 "uuid": "622935b8-eeea-45ba-ab58-184a0076d263", 00:22:11.837 "is_configured": true, 00:22:11.837 "data_offset": 0, 00:22:11.837 "data_size": 65536 00:22:11.837 }, 00:22:11.837 { 00:22:11.837 "name": "BaseBdev4", 00:22:11.837 "uuid": "e6fa920c-8bc1-4540-94d2-9ae9c8864fb3", 00:22:11.837 "is_configured": true, 00:22:11.837 "data_offset": 0, 00:22:11.837 "data_size": 65536 00:22:11.837 } 00:22:11.837 ] 00:22:11.837 }' 00:22:11.837 23:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:11.837 23:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.098 23:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:12.098 23:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.098 23:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.098 23:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.098 23:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.098 23:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:22:12.098 23:03:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:12.098 23:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.098 23:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.098 BaseBdev1 00:22:12.098 [2024-12-09 23:03:47.326547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:12.098 23:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.098 23:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:22:12.098 23:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:12.098 23:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:12.098 23:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:12.098 23:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:12.098 23:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:12.098 23:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:12.098 23:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.098 23:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.098 23:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.098 23:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:12.098 23:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.098 23:03:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:12.098 [ 00:22:12.098 { 00:22:12.098 "name": "BaseBdev1", 00:22:12.098 "aliases": [ 00:22:12.098 "7b3a06ea-d3a2-4142-bdbf-665afe916c0c" 00:22:12.098 ], 00:22:12.098 "product_name": "Malloc disk", 00:22:12.098 "block_size": 512, 00:22:12.098 "num_blocks": 65536, 00:22:12.098 "uuid": "7b3a06ea-d3a2-4142-bdbf-665afe916c0c", 00:22:12.098 "assigned_rate_limits": { 00:22:12.098 "rw_ios_per_sec": 0, 00:22:12.098 "rw_mbytes_per_sec": 0, 00:22:12.098 "r_mbytes_per_sec": 0, 00:22:12.098 "w_mbytes_per_sec": 0 00:22:12.098 }, 00:22:12.098 "claimed": true, 00:22:12.098 "claim_type": "exclusive_write", 00:22:12.099 "zoned": false, 00:22:12.099 "supported_io_types": { 00:22:12.099 "read": true, 00:22:12.099 "write": true, 00:22:12.099 "unmap": true, 00:22:12.099 "flush": true, 00:22:12.099 "reset": true, 00:22:12.099 "nvme_admin": false, 00:22:12.099 "nvme_io": false, 00:22:12.099 "nvme_io_md": false, 00:22:12.099 "write_zeroes": true, 00:22:12.099 "zcopy": true, 00:22:12.099 "get_zone_info": false, 00:22:12.099 "zone_management": false, 00:22:12.099 "zone_append": false, 00:22:12.099 "compare": false, 00:22:12.099 "compare_and_write": false, 00:22:12.099 "abort": true, 00:22:12.099 "seek_hole": false, 00:22:12.099 "seek_data": false, 00:22:12.099 "copy": true, 00:22:12.099 "nvme_iov_md": false 00:22:12.099 }, 00:22:12.099 "memory_domains": [ 00:22:12.099 { 00:22:12.099 "dma_device_id": "system", 00:22:12.099 "dma_device_type": 1 00:22:12.099 }, 00:22:12.099 { 00:22:12.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:12.099 "dma_device_type": 2 00:22:12.099 } 00:22:12.099 ], 00:22:12.099 "driver_specific": {} 00:22:12.099 } 00:22:12.099 ] 00:22:12.099 23:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.099 23:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:12.099 23:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:12.099 23:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:12.099 23:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:12.099 23:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:12.099 23:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:12.099 23:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:12.099 23:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:12.099 23:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:12.099 23:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:12.099 23:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:12.099 23:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.099 23:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.099 23:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.099 23:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:12.099 23:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.099 23:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:12.099 "name": "Existed_Raid", 00:22:12.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.099 "strip_size_kb": 64, 00:22:12.099 "state": "configuring", 00:22:12.099 "raid_level": "concat", 00:22:12.099 "superblock": false, 
00:22:12.099 "num_base_bdevs": 4, 00:22:12.099 "num_base_bdevs_discovered": 3, 00:22:12.099 "num_base_bdevs_operational": 4, 00:22:12.099 "base_bdevs_list": [ 00:22:12.099 { 00:22:12.099 "name": "BaseBdev1", 00:22:12.099 "uuid": "7b3a06ea-d3a2-4142-bdbf-665afe916c0c", 00:22:12.099 "is_configured": true, 00:22:12.099 "data_offset": 0, 00:22:12.099 "data_size": 65536 00:22:12.099 }, 00:22:12.099 { 00:22:12.099 "name": null, 00:22:12.099 "uuid": "866c0b09-3cdc-42ba-b60c-8475e05bb722", 00:22:12.099 "is_configured": false, 00:22:12.099 "data_offset": 0, 00:22:12.099 "data_size": 65536 00:22:12.099 }, 00:22:12.099 { 00:22:12.099 "name": "BaseBdev3", 00:22:12.099 "uuid": "622935b8-eeea-45ba-ab58-184a0076d263", 00:22:12.099 "is_configured": true, 00:22:12.099 "data_offset": 0, 00:22:12.099 "data_size": 65536 00:22:12.099 }, 00:22:12.099 { 00:22:12.099 "name": "BaseBdev4", 00:22:12.099 "uuid": "e6fa920c-8bc1-4540-94d2-9ae9c8864fb3", 00:22:12.099 "is_configured": true, 00:22:12.099 "data_offset": 0, 00:22:12.099 "data_size": 65536 00:22:12.099 } 00:22:12.099 ] 00:22:12.099 }' 00:22:12.099 23:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:12.099 23:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.360 23:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.360 23:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.360 23:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.360 23:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:12.360 23:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.360 23:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:22:12.360 23:03:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:22:12.360 23:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.360 23:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.621 [2024-12-09 23:03:47.722769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:12.621 23:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.621 23:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:12.621 23:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:12.621 23:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:12.621 23:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:12.621 23:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:12.621 23:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:12.621 23:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:12.621 23:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:12.621 23:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:12.621 23:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:12.621 23:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.621 23:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.621 23:03:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:12.621 23:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:12.621 23:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.621 23:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:12.621 "name": "Existed_Raid", 00:22:12.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.621 "strip_size_kb": 64, 00:22:12.621 "state": "configuring", 00:22:12.621 "raid_level": "concat", 00:22:12.621 "superblock": false, 00:22:12.621 "num_base_bdevs": 4, 00:22:12.621 "num_base_bdevs_discovered": 2, 00:22:12.621 "num_base_bdevs_operational": 4, 00:22:12.621 "base_bdevs_list": [ 00:22:12.621 { 00:22:12.621 "name": "BaseBdev1", 00:22:12.621 "uuid": "7b3a06ea-d3a2-4142-bdbf-665afe916c0c", 00:22:12.621 "is_configured": true, 00:22:12.621 "data_offset": 0, 00:22:12.621 "data_size": 65536 00:22:12.621 }, 00:22:12.621 { 00:22:12.621 "name": null, 00:22:12.621 "uuid": "866c0b09-3cdc-42ba-b60c-8475e05bb722", 00:22:12.621 "is_configured": false, 00:22:12.621 "data_offset": 0, 00:22:12.621 "data_size": 65536 00:22:12.621 }, 00:22:12.621 { 00:22:12.621 "name": null, 00:22:12.621 "uuid": "622935b8-eeea-45ba-ab58-184a0076d263", 00:22:12.621 "is_configured": false, 00:22:12.621 "data_offset": 0, 00:22:12.621 "data_size": 65536 00:22:12.621 }, 00:22:12.621 { 00:22:12.621 "name": "BaseBdev4", 00:22:12.621 "uuid": "e6fa920c-8bc1-4540-94d2-9ae9c8864fb3", 00:22:12.621 "is_configured": true, 00:22:12.621 "data_offset": 0, 00:22:12.621 "data_size": 65536 00:22:12.621 } 00:22:12.621 ] 00:22:12.621 }' 00:22:12.621 23:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:12.621 23:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.883 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:22:12.883 23:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.883 23:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.883 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:12.883 23:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.883 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:22:12.883 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:12.883 23:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.883 23:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.883 [2024-12-09 23:03:48.070808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:12.883 23:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.883 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:12.883 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:12.883 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:12.883 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:12.883 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:12.883 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:12.883 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:22:12.883 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:12.884 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:12.884 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:12.884 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.884 23:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.884 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:12.884 23:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.884 23:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.884 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:12.884 "name": "Existed_Raid", 00:22:12.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.884 "strip_size_kb": 64, 00:22:12.884 "state": "configuring", 00:22:12.884 "raid_level": "concat", 00:22:12.884 "superblock": false, 00:22:12.884 "num_base_bdevs": 4, 00:22:12.884 "num_base_bdevs_discovered": 3, 00:22:12.884 "num_base_bdevs_operational": 4, 00:22:12.884 "base_bdevs_list": [ 00:22:12.884 { 00:22:12.884 "name": "BaseBdev1", 00:22:12.884 "uuid": "7b3a06ea-d3a2-4142-bdbf-665afe916c0c", 00:22:12.884 "is_configured": true, 00:22:12.884 "data_offset": 0, 00:22:12.884 "data_size": 65536 00:22:12.884 }, 00:22:12.884 { 00:22:12.884 "name": null, 00:22:12.884 "uuid": "866c0b09-3cdc-42ba-b60c-8475e05bb722", 00:22:12.884 "is_configured": false, 00:22:12.884 "data_offset": 0, 00:22:12.884 "data_size": 65536 00:22:12.884 }, 00:22:12.884 { 00:22:12.884 "name": "BaseBdev3", 00:22:12.884 "uuid": "622935b8-eeea-45ba-ab58-184a0076d263", 00:22:12.884 
"is_configured": true, 00:22:12.884 "data_offset": 0, 00:22:12.884 "data_size": 65536 00:22:12.884 }, 00:22:12.884 { 00:22:12.884 "name": "BaseBdev4", 00:22:12.884 "uuid": "e6fa920c-8bc1-4540-94d2-9ae9c8864fb3", 00:22:12.884 "is_configured": true, 00:22:12.884 "data_offset": 0, 00:22:12.884 "data_size": 65536 00:22:12.884 } 00:22:12.884 ] 00:22:12.884 }' 00:22:12.884 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:12.884 23:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.146 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:13.146 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.146 23:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.146 23:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.146 23:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.146 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:22:13.146 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:13.146 23:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.146 23:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.146 [2024-12-09 23:03:48.418942] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:13.146 23:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.146 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:13.146 23:03:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:13.146 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:13.146 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:13.146 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:13.146 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:13.146 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:13.146 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:13.146 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:13.146 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:13.146 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.146 23:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.146 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:13.146 23:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.146 23:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.407 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:13.407 "name": "Existed_Raid", 00:22:13.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.407 "strip_size_kb": 64, 00:22:13.407 "state": "configuring", 00:22:13.407 "raid_level": "concat", 00:22:13.407 "superblock": false, 00:22:13.407 "num_base_bdevs": 4, 00:22:13.407 "num_base_bdevs_discovered": 2, 00:22:13.407 "num_base_bdevs_operational": 4, 
00:22:13.407 "base_bdevs_list": [ 00:22:13.407 { 00:22:13.407 "name": null, 00:22:13.407 "uuid": "7b3a06ea-d3a2-4142-bdbf-665afe916c0c", 00:22:13.407 "is_configured": false, 00:22:13.407 "data_offset": 0, 00:22:13.407 "data_size": 65536 00:22:13.407 }, 00:22:13.407 { 00:22:13.407 "name": null, 00:22:13.407 "uuid": "866c0b09-3cdc-42ba-b60c-8475e05bb722", 00:22:13.407 "is_configured": false, 00:22:13.407 "data_offset": 0, 00:22:13.407 "data_size": 65536 00:22:13.407 }, 00:22:13.407 { 00:22:13.407 "name": "BaseBdev3", 00:22:13.407 "uuid": "622935b8-eeea-45ba-ab58-184a0076d263", 00:22:13.407 "is_configured": true, 00:22:13.407 "data_offset": 0, 00:22:13.407 "data_size": 65536 00:22:13.407 }, 00:22:13.407 { 00:22:13.407 "name": "BaseBdev4", 00:22:13.407 "uuid": "e6fa920c-8bc1-4540-94d2-9ae9c8864fb3", 00:22:13.407 "is_configured": true, 00:22:13.407 "data_offset": 0, 00:22:13.407 "data_size": 65536 00:22:13.407 } 00:22:13.407 ] 00:22:13.407 }' 00:22:13.407 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:13.407 23:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.668 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.668 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:13.668 23:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.668 23:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.668 23:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.668 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:22:13.668 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:13.669 23:03:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.669 23:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.669 [2024-12-09 23:03:48.855848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:13.669 23:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.669 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:13.669 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:13.669 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:13.669 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:13.669 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:13.669 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:13.669 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:13.669 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:13.669 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:13.669 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:13.669 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.669 23:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.669 23:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.669 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "Existed_Raid")' 00:22:13.669 23:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.669 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:13.669 "name": "Existed_Raid", 00:22:13.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.669 "strip_size_kb": 64, 00:22:13.669 "state": "configuring", 00:22:13.669 "raid_level": "concat", 00:22:13.669 "superblock": false, 00:22:13.669 "num_base_bdevs": 4, 00:22:13.669 "num_base_bdevs_discovered": 3, 00:22:13.669 "num_base_bdevs_operational": 4, 00:22:13.669 "base_bdevs_list": [ 00:22:13.669 { 00:22:13.669 "name": null, 00:22:13.669 "uuid": "7b3a06ea-d3a2-4142-bdbf-665afe916c0c", 00:22:13.669 "is_configured": false, 00:22:13.669 "data_offset": 0, 00:22:13.669 "data_size": 65536 00:22:13.669 }, 00:22:13.669 { 00:22:13.669 "name": "BaseBdev2", 00:22:13.669 "uuid": "866c0b09-3cdc-42ba-b60c-8475e05bb722", 00:22:13.669 "is_configured": true, 00:22:13.669 "data_offset": 0, 00:22:13.669 "data_size": 65536 00:22:13.669 }, 00:22:13.669 { 00:22:13.669 "name": "BaseBdev3", 00:22:13.669 "uuid": "622935b8-eeea-45ba-ab58-184a0076d263", 00:22:13.669 "is_configured": true, 00:22:13.669 "data_offset": 0, 00:22:13.669 "data_size": 65536 00:22:13.669 }, 00:22:13.669 { 00:22:13.669 "name": "BaseBdev4", 00:22:13.669 "uuid": "e6fa920c-8bc1-4540-94d2-9ae9c8864fb3", 00:22:13.669 "is_configured": true, 00:22:13.669 "data_offset": 0, 00:22:13.669 "data_size": 65536 00:22:13.669 } 00:22:13.669 ] 00:22:13.669 }' 00:22:13.669 23:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:13.669 23:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.955 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.955 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:13.955 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.955 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:13.955 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.955 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:13.955 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.955 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.955 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:13.955 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.955 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.955 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7b3a06ea-d3a2-4142-bdbf-665afe916c0c 00:22:13.955 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.955 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.955 [2024-12-09 23:03:49.252215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:13.955 [2024-12-09 23:03:49.252288] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:13.956 [2024-12-09 23:03:49.252296] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:22:13.956 [2024-12-09 23:03:49.252588] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:22:13.956 [2024-12-09 23:03:49.252751] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:13.956 [2024-12-09 23:03:49.252763] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:13.956 [2024-12-09 23:03:49.253038] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:13.956 NewBaseBdev 00:22:13.956 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.956 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:22:13.956 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:22:13.956 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:13.956 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:13.956 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:13.956 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:13.956 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:13.956 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.956 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.956 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.956 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:13.956 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.956 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.956 [ 00:22:13.956 { 
00:22:13.956 "name": "NewBaseBdev", 00:22:13.956 "aliases": [ 00:22:13.956 "7b3a06ea-d3a2-4142-bdbf-665afe916c0c" 00:22:13.956 ], 00:22:13.956 "product_name": "Malloc disk", 00:22:13.956 "block_size": 512, 00:22:13.956 "num_blocks": 65536, 00:22:13.956 "uuid": "7b3a06ea-d3a2-4142-bdbf-665afe916c0c", 00:22:13.956 "assigned_rate_limits": { 00:22:13.956 "rw_ios_per_sec": 0, 00:22:13.956 "rw_mbytes_per_sec": 0, 00:22:13.956 "r_mbytes_per_sec": 0, 00:22:13.956 "w_mbytes_per_sec": 0 00:22:13.956 }, 00:22:13.956 "claimed": true, 00:22:13.956 "claim_type": "exclusive_write", 00:22:13.956 "zoned": false, 00:22:13.956 "supported_io_types": { 00:22:13.956 "read": true, 00:22:13.956 "write": true, 00:22:13.956 "unmap": true, 00:22:13.956 "flush": true, 00:22:13.956 "reset": true, 00:22:13.956 "nvme_admin": false, 00:22:13.956 "nvme_io": false, 00:22:13.956 "nvme_io_md": false, 00:22:13.956 "write_zeroes": true, 00:22:13.956 "zcopy": true, 00:22:13.956 "get_zone_info": false, 00:22:13.956 "zone_management": false, 00:22:13.956 "zone_append": false, 00:22:13.956 "compare": false, 00:22:13.956 "compare_and_write": false, 00:22:13.956 "abort": true, 00:22:13.956 "seek_hole": false, 00:22:13.956 "seek_data": false, 00:22:13.956 "copy": true, 00:22:13.956 "nvme_iov_md": false 00:22:13.956 }, 00:22:13.956 "memory_domains": [ 00:22:13.956 { 00:22:13.956 "dma_device_id": "system", 00:22:13.956 "dma_device_type": 1 00:22:13.956 }, 00:22:13.956 { 00:22:13.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.956 "dma_device_type": 2 00:22:13.956 } 00:22:13.956 ], 00:22:13.956 "driver_specific": {} 00:22:13.956 } 00:22:13.956 ] 00:22:13.956 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.956 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:13.956 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:22:13.956 
23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:13.956 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:13.956 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:13.956 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:13.956 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:13.956 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:13.956 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:13.956 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:13.956 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:13.956 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:13.956 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.956 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.956 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.956 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.216 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:14.216 "name": "Existed_Raid", 00:22:14.216 "uuid": "ba934403-ef47-4858-968c-c950debc15e7", 00:22:14.216 "strip_size_kb": 64, 00:22:14.216 "state": "online", 00:22:14.216 "raid_level": "concat", 00:22:14.216 "superblock": false, 00:22:14.216 "num_base_bdevs": 4, 00:22:14.216 "num_base_bdevs_discovered": 4, 00:22:14.216 
"num_base_bdevs_operational": 4, 00:22:14.216 "base_bdevs_list": [ 00:22:14.216 { 00:22:14.216 "name": "NewBaseBdev", 00:22:14.216 "uuid": "7b3a06ea-d3a2-4142-bdbf-665afe916c0c", 00:22:14.216 "is_configured": true, 00:22:14.216 "data_offset": 0, 00:22:14.216 "data_size": 65536 00:22:14.216 }, 00:22:14.216 { 00:22:14.216 "name": "BaseBdev2", 00:22:14.216 "uuid": "866c0b09-3cdc-42ba-b60c-8475e05bb722", 00:22:14.216 "is_configured": true, 00:22:14.216 "data_offset": 0, 00:22:14.216 "data_size": 65536 00:22:14.216 }, 00:22:14.216 { 00:22:14.216 "name": "BaseBdev3", 00:22:14.216 "uuid": "622935b8-eeea-45ba-ab58-184a0076d263", 00:22:14.216 "is_configured": true, 00:22:14.216 "data_offset": 0, 00:22:14.216 "data_size": 65536 00:22:14.216 }, 00:22:14.216 { 00:22:14.216 "name": "BaseBdev4", 00:22:14.216 "uuid": "e6fa920c-8bc1-4540-94d2-9ae9c8864fb3", 00:22:14.216 "is_configured": true, 00:22:14.216 "data_offset": 0, 00:22:14.216 "data_size": 65536 00:22:14.216 } 00:22:14.216 ] 00:22:14.216 }' 00:22:14.216 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:14.216 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.479 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:22:14.479 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:14.479 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:14.479 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:14.479 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:14.479 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:14.479 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:14.479 
23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.480 [2024-12-09 23:03:49.588784] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:14.480 "name": "Existed_Raid", 00:22:14.480 "aliases": [ 00:22:14.480 "ba934403-ef47-4858-968c-c950debc15e7" 00:22:14.480 ], 00:22:14.480 "product_name": "Raid Volume", 00:22:14.480 "block_size": 512, 00:22:14.480 "num_blocks": 262144, 00:22:14.480 "uuid": "ba934403-ef47-4858-968c-c950debc15e7", 00:22:14.480 "assigned_rate_limits": { 00:22:14.480 "rw_ios_per_sec": 0, 00:22:14.480 "rw_mbytes_per_sec": 0, 00:22:14.480 "r_mbytes_per_sec": 0, 00:22:14.480 "w_mbytes_per_sec": 0 00:22:14.480 }, 00:22:14.480 "claimed": false, 00:22:14.480 "zoned": false, 00:22:14.480 "supported_io_types": { 00:22:14.480 "read": true, 00:22:14.480 "write": true, 00:22:14.480 "unmap": true, 00:22:14.480 "flush": true, 00:22:14.480 "reset": true, 00:22:14.480 "nvme_admin": false, 00:22:14.480 "nvme_io": false, 00:22:14.480 "nvme_io_md": false, 00:22:14.480 "write_zeroes": true, 00:22:14.480 "zcopy": false, 00:22:14.480 "get_zone_info": false, 00:22:14.480 "zone_management": false, 00:22:14.480 "zone_append": false, 00:22:14.480 "compare": false, 00:22:14.480 "compare_and_write": false, 00:22:14.480 "abort": false, 00:22:14.480 "seek_hole": false, 00:22:14.480 "seek_data": false, 00:22:14.480 "copy": false, 00:22:14.480 "nvme_iov_md": false 00:22:14.480 }, 00:22:14.480 "memory_domains": [ 00:22:14.480 { 00:22:14.480 "dma_device_id": 
"system", 00:22:14.480 "dma_device_type": 1 00:22:14.480 }, 00:22:14.480 { 00:22:14.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.480 "dma_device_type": 2 00:22:14.480 }, 00:22:14.480 { 00:22:14.480 "dma_device_id": "system", 00:22:14.480 "dma_device_type": 1 00:22:14.480 }, 00:22:14.480 { 00:22:14.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.480 "dma_device_type": 2 00:22:14.480 }, 00:22:14.480 { 00:22:14.480 "dma_device_id": "system", 00:22:14.480 "dma_device_type": 1 00:22:14.480 }, 00:22:14.480 { 00:22:14.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.480 "dma_device_type": 2 00:22:14.480 }, 00:22:14.480 { 00:22:14.480 "dma_device_id": "system", 00:22:14.480 "dma_device_type": 1 00:22:14.480 }, 00:22:14.480 { 00:22:14.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.480 "dma_device_type": 2 00:22:14.480 } 00:22:14.480 ], 00:22:14.480 "driver_specific": { 00:22:14.480 "raid": { 00:22:14.480 "uuid": "ba934403-ef47-4858-968c-c950debc15e7", 00:22:14.480 "strip_size_kb": 64, 00:22:14.480 "state": "online", 00:22:14.480 "raid_level": "concat", 00:22:14.480 "superblock": false, 00:22:14.480 "num_base_bdevs": 4, 00:22:14.480 "num_base_bdevs_discovered": 4, 00:22:14.480 "num_base_bdevs_operational": 4, 00:22:14.480 "base_bdevs_list": [ 00:22:14.480 { 00:22:14.480 "name": "NewBaseBdev", 00:22:14.480 "uuid": "7b3a06ea-d3a2-4142-bdbf-665afe916c0c", 00:22:14.480 "is_configured": true, 00:22:14.480 "data_offset": 0, 00:22:14.480 "data_size": 65536 00:22:14.480 }, 00:22:14.480 { 00:22:14.480 "name": "BaseBdev2", 00:22:14.480 "uuid": "866c0b09-3cdc-42ba-b60c-8475e05bb722", 00:22:14.480 "is_configured": true, 00:22:14.480 "data_offset": 0, 00:22:14.480 "data_size": 65536 00:22:14.480 }, 00:22:14.480 { 00:22:14.480 "name": "BaseBdev3", 00:22:14.480 "uuid": "622935b8-eeea-45ba-ab58-184a0076d263", 00:22:14.480 "is_configured": true, 00:22:14.480 "data_offset": 0, 00:22:14.480 "data_size": 65536 00:22:14.480 }, 00:22:14.480 { 00:22:14.480 "name": 
"BaseBdev4", 00:22:14.480 "uuid": "e6fa920c-8bc1-4540-94d2-9ae9c8864fb3", 00:22:14.480 "is_configured": true, 00:22:14.480 "data_offset": 0, 00:22:14.480 "data_size": 65536 00:22:14.480 } 00:22:14.480 ] 00:22:14.480 } 00:22:14.480 } 00:22:14.480 }' 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:22:14.480 BaseBdev2 00:22:14.480 BaseBdev3 00:22:14.480 BaseBdev4' 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:22:14.480 23:03:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.480 [2024-12-09 23:03:49.816419] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:14.480 [2024-12-09 23:03:49.816462] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:14.480 [2024-12-09 23:03:49.816554] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:14.480 [2024-12-09 23:03:49.816635] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:14.480 [2024-12-09 23:03:49.816647] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69486 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 69486 ']' 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69486 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:14.480 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69486 00:22:14.743 killing process with pid 69486 00:22:14.743 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:14.743 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:14.743 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69486' 00:22:14.743 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69486 00:22:14.743 23:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69486 00:22:14.743 [2024-12-09 23:03:49.850733] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:15.004 [2024-12-09 23:03:50.127240] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:15.946 ************************************ 00:22:15.946 END TEST raid_state_function_test 00:22:15.946 ************************************ 00:22:15.946 23:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:22:15.946 00:22:15.946 real 0m8.963s 00:22:15.946 user 0m13.842s 00:22:15.946 sys 0m1.738s 00:22:15.946 23:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:15.946 23:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.946 23:03:50 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 
00:22:15.946 23:03:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:15.946 23:03:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:15.946 23:03:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:15.946 ************************************ 00:22:15.946 START TEST raid_state_function_test_sb 00:22:15.946 ************************************ 00:22:15.946 23:03:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:22:15.946 23:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:22:15.946 23:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:22:15.946 23:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:22:15.946 23:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:15.946 23:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:15.946 23:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:15.946 23:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:15.946 23:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:15.946 23:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:15.946 23:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:15.946 23:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:15.946 23:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:15.946 23:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:22:15.946 23:03:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:15.946 23:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:15.946 23:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:22:15.946 23:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:15.946 23:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:15.946 23:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:15.946 23:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:15.946 23:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:15.946 23:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:15.946 23:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:15.946 23:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:15.946 23:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:22:15.946 23:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:22:15.946 23:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:22:15.946 23:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:22:15.946 23:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:22:15.946 Process raid pid: 70135 00:22:15.946 23:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70135 00:22:15.946 23:03:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70135' 00:22:15.946 23:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70135 00:22:15.946 23:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70135 ']' 00:22:15.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:15.946 23:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.946 23:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:15.947 23:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:15.947 23:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.947 23:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:15.947 23:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:15.947 [2024-12-09 23:03:51.079647] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:22:15.947 [2024-12-09 23:03:51.079816] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:15.947 [2024-12-09 23:03:51.245026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.208 [2024-12-09 23:03:51.376516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.208 [2024-12-09 23:03:51.536222] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:16.208 [2024-12-09 23:03:51.536284] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:16.824 23:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:16.824 23:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:22:16.824 23:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:16.824 23:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.824 23:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:16.824 [2024-12-09 23:03:51.959158] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:16.824 [2024-12-09 23:03:51.959242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:16.824 [2024-12-09 23:03:51.959259] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:16.824 [2024-12-09 23:03:51.959271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:16.824 [2024-12-09 23:03:51.959278] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:22:16.824 [2024-12-09 23:03:51.959288] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:16.824 [2024-12-09 23:03:51.959294] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:16.824 [2024-12-09 23:03:51.959303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:16.824 23:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.824 23:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:16.824 23:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:16.824 23:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:16.824 23:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:16.824 23:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:16.824 23:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:16.824 23:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:16.824 23:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:16.824 23:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:16.824 23:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:16.824 23:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:16.824 23:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:16.824 
23:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.824 23:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:16.824 23:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.824 23:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:16.824 "name": "Existed_Raid", 00:22:16.824 "uuid": "6acf9a31-b241-49ba-8cb5-6de8742376c7", 00:22:16.824 "strip_size_kb": 64, 00:22:16.824 "state": "configuring", 00:22:16.824 "raid_level": "concat", 00:22:16.824 "superblock": true, 00:22:16.824 "num_base_bdevs": 4, 00:22:16.824 "num_base_bdevs_discovered": 0, 00:22:16.824 "num_base_bdevs_operational": 4, 00:22:16.824 "base_bdevs_list": [ 00:22:16.824 { 00:22:16.824 "name": "BaseBdev1", 00:22:16.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.824 "is_configured": false, 00:22:16.824 "data_offset": 0, 00:22:16.824 "data_size": 0 00:22:16.824 }, 00:22:16.824 { 00:22:16.824 "name": "BaseBdev2", 00:22:16.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.824 "is_configured": false, 00:22:16.824 "data_offset": 0, 00:22:16.824 "data_size": 0 00:22:16.824 }, 00:22:16.824 { 00:22:16.824 "name": "BaseBdev3", 00:22:16.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.824 "is_configured": false, 00:22:16.824 "data_offset": 0, 00:22:16.824 "data_size": 0 00:22:16.824 }, 00:22:16.824 { 00:22:16.824 "name": "BaseBdev4", 00:22:16.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.824 "is_configured": false, 00:22:16.824 "data_offset": 0, 00:22:16.824 "data_size": 0 00:22:16.824 } 00:22:16.824 ] 00:22:16.824 }' 00:22:16.824 23:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:16.824 23:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:17.086 23:03:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:17.086 [2024-12-09 23:03:52.275147] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:17.086 [2024-12-09 23:03:52.275206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:17.086 [2024-12-09 23:03:52.287207] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:17.086 [2024-12-09 23:03:52.287269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:17.086 [2024-12-09 23:03:52.287280] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:17.086 [2024-12-09 23:03:52.287291] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:17.086 [2024-12-09 23:03:52.287297] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:17.086 [2024-12-09 23:03:52.287307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:17.086 [2024-12-09 23:03:52.287313] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:22:17.086 [2024-12-09 23:03:52.287323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:17.086 [2024-12-09 23:03:52.324941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:17.086 BaseBdev1 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:17.086 [ 00:22:17.086 { 00:22:17.086 "name": "BaseBdev1", 00:22:17.086 "aliases": [ 00:22:17.086 "3b68177a-d8f3-451b-8955-4cf3309a2d50" 00:22:17.086 ], 00:22:17.086 "product_name": "Malloc disk", 00:22:17.086 "block_size": 512, 00:22:17.086 "num_blocks": 65536, 00:22:17.086 "uuid": "3b68177a-d8f3-451b-8955-4cf3309a2d50", 00:22:17.086 "assigned_rate_limits": { 00:22:17.086 "rw_ios_per_sec": 0, 00:22:17.086 "rw_mbytes_per_sec": 0, 00:22:17.086 "r_mbytes_per_sec": 0, 00:22:17.086 "w_mbytes_per_sec": 0 00:22:17.086 }, 00:22:17.086 "claimed": true, 00:22:17.086 "claim_type": "exclusive_write", 00:22:17.086 "zoned": false, 00:22:17.086 "supported_io_types": { 00:22:17.086 "read": true, 00:22:17.086 "write": true, 00:22:17.086 "unmap": true, 00:22:17.086 "flush": true, 00:22:17.086 "reset": true, 00:22:17.086 "nvme_admin": false, 00:22:17.086 "nvme_io": false, 00:22:17.086 "nvme_io_md": false, 00:22:17.086 "write_zeroes": true, 00:22:17.086 "zcopy": true, 00:22:17.086 "get_zone_info": false, 00:22:17.086 "zone_management": false, 00:22:17.086 "zone_append": false, 00:22:17.086 "compare": false, 00:22:17.086 "compare_and_write": false, 00:22:17.086 "abort": true, 00:22:17.086 "seek_hole": false, 00:22:17.086 "seek_data": false, 00:22:17.086 "copy": true, 00:22:17.086 "nvme_iov_md": false 00:22:17.086 }, 00:22:17.086 "memory_domains": [ 00:22:17.086 { 00:22:17.086 "dma_device_id": "system", 00:22:17.086 "dma_device_type": 1 00:22:17.086 }, 00:22:17.086 { 00:22:17.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.086 "dma_device_type": 2 00:22:17.086 } 
00:22:17.086 ], 00:22:17.086 "driver_specific": {} 00:22:17.086 } 00:22:17.086 ] 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:17.086 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.087 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:17.087 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.087 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:17.087 23:03:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.087 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:17.087 "name": "Existed_Raid", 00:22:17.087 "uuid": "8d24248c-c5bb-4d40-bdd0-008c1ee7c0e8", 00:22:17.087 "strip_size_kb": 64, 00:22:17.087 "state": "configuring", 00:22:17.087 "raid_level": "concat", 00:22:17.087 "superblock": true, 00:22:17.087 "num_base_bdevs": 4, 00:22:17.087 "num_base_bdevs_discovered": 1, 00:22:17.087 "num_base_bdevs_operational": 4, 00:22:17.087 "base_bdevs_list": [ 00:22:17.087 { 00:22:17.087 "name": "BaseBdev1", 00:22:17.087 "uuid": "3b68177a-d8f3-451b-8955-4cf3309a2d50", 00:22:17.087 "is_configured": true, 00:22:17.087 "data_offset": 2048, 00:22:17.087 "data_size": 63488 00:22:17.087 }, 00:22:17.087 { 00:22:17.087 "name": "BaseBdev2", 00:22:17.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.087 "is_configured": false, 00:22:17.087 "data_offset": 0, 00:22:17.087 "data_size": 0 00:22:17.087 }, 00:22:17.087 { 00:22:17.087 "name": "BaseBdev3", 00:22:17.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.087 "is_configured": false, 00:22:17.087 "data_offset": 0, 00:22:17.087 "data_size": 0 00:22:17.087 }, 00:22:17.087 { 00:22:17.087 "name": "BaseBdev4", 00:22:17.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.087 "is_configured": false, 00:22:17.087 "data_offset": 0, 00:22:17.087 "data_size": 0 00:22:17.087 } 00:22:17.087 ] 00:22:17.087 }' 00:22:17.087 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:17.087 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:17.348 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:17.348 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.348 23:03:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:17.348 [2024-12-09 23:03:52.665065] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:17.348 [2024-12-09 23:03:52.665149] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:17.348 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.348 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:17.348 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.348 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:17.348 [2024-12-09 23:03:52.677190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:17.348 [2024-12-09 23:03:52.679318] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:17.348 [2024-12-09 23:03:52.679379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:17.348 [2024-12-09 23:03:52.679390] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:17.348 [2024-12-09 23:03:52.679403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:17.348 [2024-12-09 23:03:52.679411] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:17.348 [2024-12-09 23:03:52.679421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:17.348 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.348 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:22:17.348 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:17.348 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:17.348 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:17.348 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:17.348 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:17.348 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:17.348 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:17.348 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:17.348 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:17.348 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:17.348 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:17.348 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.348 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.348 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:17.348 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:17.348 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.610 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:22:17.610 "name": "Existed_Raid", 00:22:17.610 "uuid": "711eec91-818a-4b3b-95aa-c86b3ca9ce28", 00:22:17.610 "strip_size_kb": 64, 00:22:17.610 "state": "configuring", 00:22:17.610 "raid_level": "concat", 00:22:17.610 "superblock": true, 00:22:17.610 "num_base_bdevs": 4, 00:22:17.610 "num_base_bdevs_discovered": 1, 00:22:17.610 "num_base_bdevs_operational": 4, 00:22:17.610 "base_bdevs_list": [ 00:22:17.610 { 00:22:17.610 "name": "BaseBdev1", 00:22:17.610 "uuid": "3b68177a-d8f3-451b-8955-4cf3309a2d50", 00:22:17.610 "is_configured": true, 00:22:17.610 "data_offset": 2048, 00:22:17.610 "data_size": 63488 00:22:17.610 }, 00:22:17.610 { 00:22:17.610 "name": "BaseBdev2", 00:22:17.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.610 "is_configured": false, 00:22:17.610 "data_offset": 0, 00:22:17.610 "data_size": 0 00:22:17.610 }, 00:22:17.610 { 00:22:17.610 "name": "BaseBdev3", 00:22:17.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.610 "is_configured": false, 00:22:17.610 "data_offset": 0, 00:22:17.610 "data_size": 0 00:22:17.610 }, 00:22:17.610 { 00:22:17.610 "name": "BaseBdev4", 00:22:17.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.610 "is_configured": false, 00:22:17.610 "data_offset": 0, 00:22:17.610 "data_size": 0 00:22:17.610 } 00:22:17.610 ] 00:22:17.610 }' 00:22:17.610 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:17.610 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:17.871 23:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:17.871 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.871 23:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:17.871 [2024-12-09 23:03:53.031795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:22:17.871 BaseBdev2 00:22:17.871 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.871 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:17.871 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:17.871 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:17.871 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:17.871 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:17.871 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:17.871 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:17.871 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.871 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:17.871 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.871 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:17.871 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.871 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:17.871 [ 00:22:17.871 { 00:22:17.871 "name": "BaseBdev2", 00:22:17.871 "aliases": [ 00:22:17.871 "2a17c5b6-38a3-4ff3-a151-dcb3869bf4c8" 00:22:17.871 ], 00:22:17.871 "product_name": "Malloc disk", 00:22:17.871 "block_size": 512, 00:22:17.871 "num_blocks": 65536, 00:22:17.871 "uuid": "2a17c5b6-38a3-4ff3-a151-dcb3869bf4c8", 
00:22:17.871 "assigned_rate_limits": { 00:22:17.871 "rw_ios_per_sec": 0, 00:22:17.871 "rw_mbytes_per_sec": 0, 00:22:17.871 "r_mbytes_per_sec": 0, 00:22:17.871 "w_mbytes_per_sec": 0 00:22:17.871 }, 00:22:17.871 "claimed": true, 00:22:17.871 "claim_type": "exclusive_write", 00:22:17.871 "zoned": false, 00:22:17.871 "supported_io_types": { 00:22:17.871 "read": true, 00:22:17.871 "write": true, 00:22:17.871 "unmap": true, 00:22:17.871 "flush": true, 00:22:17.871 "reset": true, 00:22:17.871 "nvme_admin": false, 00:22:17.871 "nvme_io": false, 00:22:17.871 "nvme_io_md": false, 00:22:17.871 "write_zeroes": true, 00:22:17.871 "zcopy": true, 00:22:17.871 "get_zone_info": false, 00:22:17.871 "zone_management": false, 00:22:17.871 "zone_append": false, 00:22:17.871 "compare": false, 00:22:17.871 "compare_and_write": false, 00:22:17.871 "abort": true, 00:22:17.871 "seek_hole": false, 00:22:17.871 "seek_data": false, 00:22:17.871 "copy": true, 00:22:17.871 "nvme_iov_md": false 00:22:17.871 }, 00:22:17.871 "memory_domains": [ 00:22:17.871 { 00:22:17.871 "dma_device_id": "system", 00:22:17.871 "dma_device_type": 1 00:22:17.871 }, 00:22:17.871 { 00:22:17.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.871 "dma_device_type": 2 00:22:17.871 } 00:22:17.871 ], 00:22:17.871 "driver_specific": {} 00:22:17.871 } 00:22:17.871 ] 00:22:17.871 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.871 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:17.871 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:17.871 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:17.871 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:17.871 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:22:17.871 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:17.871 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:17.871 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:17.871 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:17.871 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:17.871 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:17.871 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:17.871 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:17.871 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.871 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.872 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:17.872 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:17.872 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.872 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:17.872 "name": "Existed_Raid", 00:22:17.872 "uuid": "711eec91-818a-4b3b-95aa-c86b3ca9ce28", 00:22:17.872 "strip_size_kb": 64, 00:22:17.872 "state": "configuring", 00:22:17.872 "raid_level": "concat", 00:22:17.872 "superblock": true, 00:22:17.872 "num_base_bdevs": 4, 00:22:17.872 "num_base_bdevs_discovered": 2, 00:22:17.872 
"num_base_bdevs_operational": 4, 00:22:17.872 "base_bdevs_list": [ 00:22:17.872 { 00:22:17.872 "name": "BaseBdev1", 00:22:17.872 "uuid": "3b68177a-d8f3-451b-8955-4cf3309a2d50", 00:22:17.872 "is_configured": true, 00:22:17.872 "data_offset": 2048, 00:22:17.872 "data_size": 63488 00:22:17.872 }, 00:22:17.872 { 00:22:17.872 "name": "BaseBdev2", 00:22:17.872 "uuid": "2a17c5b6-38a3-4ff3-a151-dcb3869bf4c8", 00:22:17.872 "is_configured": true, 00:22:17.872 "data_offset": 2048, 00:22:17.872 "data_size": 63488 00:22:17.872 }, 00:22:17.872 { 00:22:17.872 "name": "BaseBdev3", 00:22:17.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.872 "is_configured": false, 00:22:17.872 "data_offset": 0, 00:22:17.872 "data_size": 0 00:22:17.872 }, 00:22:17.872 { 00:22:17.872 "name": "BaseBdev4", 00:22:17.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.872 "is_configured": false, 00:22:17.872 "data_offset": 0, 00:22:17.872 "data_size": 0 00:22:17.872 } 00:22:17.872 ] 00:22:17.872 }' 00:22:17.872 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:17.872 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.133 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:18.133 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.133 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.133 [2024-12-09 23:03:53.435796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:18.133 BaseBdev3 00:22:18.133 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.133 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:22:18.133 23:03:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:22:18.133 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:18.133 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:18.133 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:18.133 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:18.133 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:18.133 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.133 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.133 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.133 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:18.133 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.133 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.133 [ 00:22:18.133 { 00:22:18.133 "name": "BaseBdev3", 00:22:18.133 "aliases": [ 00:22:18.133 "a4dab642-4d50-468b-ab85-b5bfda9612b1" 00:22:18.133 ], 00:22:18.133 "product_name": "Malloc disk", 00:22:18.133 "block_size": 512, 00:22:18.133 "num_blocks": 65536, 00:22:18.133 "uuid": "a4dab642-4d50-468b-ab85-b5bfda9612b1", 00:22:18.133 "assigned_rate_limits": { 00:22:18.133 "rw_ios_per_sec": 0, 00:22:18.133 "rw_mbytes_per_sec": 0, 00:22:18.133 "r_mbytes_per_sec": 0, 00:22:18.133 "w_mbytes_per_sec": 0 00:22:18.133 }, 00:22:18.133 "claimed": true, 00:22:18.133 "claim_type": "exclusive_write", 00:22:18.133 "zoned": false, 00:22:18.133 "supported_io_types": { 
00:22:18.133 "read": true, 00:22:18.133 "write": true, 00:22:18.133 "unmap": true, 00:22:18.133 "flush": true, 00:22:18.133 "reset": true, 00:22:18.133 "nvme_admin": false, 00:22:18.133 "nvme_io": false, 00:22:18.133 "nvme_io_md": false, 00:22:18.133 "write_zeroes": true, 00:22:18.133 "zcopy": true, 00:22:18.134 "get_zone_info": false, 00:22:18.134 "zone_management": false, 00:22:18.134 "zone_append": false, 00:22:18.134 "compare": false, 00:22:18.134 "compare_and_write": false, 00:22:18.134 "abort": true, 00:22:18.134 "seek_hole": false, 00:22:18.134 "seek_data": false, 00:22:18.134 "copy": true, 00:22:18.134 "nvme_iov_md": false 00:22:18.134 }, 00:22:18.134 "memory_domains": [ 00:22:18.134 { 00:22:18.134 "dma_device_id": "system", 00:22:18.134 "dma_device_type": 1 00:22:18.134 }, 00:22:18.134 { 00:22:18.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.134 "dma_device_type": 2 00:22:18.134 } 00:22:18.134 ], 00:22:18.134 "driver_specific": {} 00:22:18.134 } 00:22:18.134 ] 00:22:18.134 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.134 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:18.134 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:18.134 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:18.134 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:18.134 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:18.134 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:18.134 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:18.134 23:03:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:18.134 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:18.134 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:18.134 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:18.134 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:18.134 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:18.134 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:18.134 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.134 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.134 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:18.134 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.396 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:18.396 "name": "Existed_Raid", 00:22:18.396 "uuid": "711eec91-818a-4b3b-95aa-c86b3ca9ce28", 00:22:18.396 "strip_size_kb": 64, 00:22:18.396 "state": "configuring", 00:22:18.396 "raid_level": "concat", 00:22:18.396 "superblock": true, 00:22:18.396 "num_base_bdevs": 4, 00:22:18.396 "num_base_bdevs_discovered": 3, 00:22:18.396 "num_base_bdevs_operational": 4, 00:22:18.396 "base_bdevs_list": [ 00:22:18.396 { 00:22:18.396 "name": "BaseBdev1", 00:22:18.396 "uuid": "3b68177a-d8f3-451b-8955-4cf3309a2d50", 00:22:18.396 "is_configured": true, 00:22:18.396 "data_offset": 2048, 00:22:18.396 "data_size": 63488 00:22:18.396 }, 00:22:18.396 { 00:22:18.396 "name": "BaseBdev2", 00:22:18.396 
"uuid": "2a17c5b6-38a3-4ff3-a151-dcb3869bf4c8", 00:22:18.396 "is_configured": true, 00:22:18.396 "data_offset": 2048, 00:22:18.396 "data_size": 63488 00:22:18.396 }, 00:22:18.396 { 00:22:18.396 "name": "BaseBdev3", 00:22:18.396 "uuid": "a4dab642-4d50-468b-ab85-b5bfda9612b1", 00:22:18.396 "is_configured": true, 00:22:18.396 "data_offset": 2048, 00:22:18.396 "data_size": 63488 00:22:18.396 }, 00:22:18.396 { 00:22:18.396 "name": "BaseBdev4", 00:22:18.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.396 "is_configured": false, 00:22:18.396 "data_offset": 0, 00:22:18.396 "data_size": 0 00:22:18.396 } 00:22:18.396 ] 00:22:18.396 }' 00:22:18.396 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:18.396 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.659 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:22:18.659 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.659 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.659 [2024-12-09 23:03:53.816350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:18.659 [2024-12-09 23:03:53.816708] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:18.659 [2024-12-09 23:03:53.816728] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:18.659 [2024-12-09 23:03:53.817050] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:18.659 BaseBdev4 00:22:18.659 [2024-12-09 23:03:53.817244] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:18.659 [2024-12-09 23:03:53.817266] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:22:18.659 [2024-12-09 23:03:53.817424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:18.659 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.659 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:22:18.659 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:22:18.659 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:18.659 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:18.659 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:18.659 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:18.659 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:18.659 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.659 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.659 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.659 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:18.659 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.659 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.659 [ 00:22:18.659 { 00:22:18.659 "name": "BaseBdev4", 00:22:18.659 "aliases": [ 00:22:18.659 "a076c943-8db8-43cb-8dd2-a514945c4824" 00:22:18.659 ], 00:22:18.659 "product_name": "Malloc disk", 00:22:18.659 "block_size": 512, 00:22:18.659 
"num_blocks": 65536, 00:22:18.659 "uuid": "a076c943-8db8-43cb-8dd2-a514945c4824", 00:22:18.659 "assigned_rate_limits": { 00:22:18.659 "rw_ios_per_sec": 0, 00:22:18.659 "rw_mbytes_per_sec": 0, 00:22:18.659 "r_mbytes_per_sec": 0, 00:22:18.659 "w_mbytes_per_sec": 0 00:22:18.659 }, 00:22:18.659 "claimed": true, 00:22:18.659 "claim_type": "exclusive_write", 00:22:18.659 "zoned": false, 00:22:18.659 "supported_io_types": { 00:22:18.659 "read": true, 00:22:18.659 "write": true, 00:22:18.659 "unmap": true, 00:22:18.659 "flush": true, 00:22:18.659 "reset": true, 00:22:18.659 "nvme_admin": false, 00:22:18.659 "nvme_io": false, 00:22:18.659 "nvme_io_md": false, 00:22:18.659 "write_zeroes": true, 00:22:18.659 "zcopy": true, 00:22:18.659 "get_zone_info": false, 00:22:18.659 "zone_management": false, 00:22:18.659 "zone_append": false, 00:22:18.659 "compare": false, 00:22:18.659 "compare_and_write": false, 00:22:18.659 "abort": true, 00:22:18.659 "seek_hole": false, 00:22:18.659 "seek_data": false, 00:22:18.659 "copy": true, 00:22:18.659 "nvme_iov_md": false 00:22:18.659 }, 00:22:18.659 "memory_domains": [ 00:22:18.659 { 00:22:18.659 "dma_device_id": "system", 00:22:18.659 "dma_device_type": 1 00:22:18.659 }, 00:22:18.659 { 00:22:18.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.659 "dma_device_type": 2 00:22:18.659 } 00:22:18.659 ], 00:22:18.659 "driver_specific": {} 00:22:18.659 } 00:22:18.659 ] 00:22:18.659 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.659 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:18.660 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:18.660 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:18.660 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:22:18.660 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:18.660 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:18.660 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:18.660 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:18.660 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:18.660 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:18.660 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:18.660 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:18.660 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:18.660 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:18.660 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.660 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:18.660 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.660 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.660 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:18.660 "name": "Existed_Raid", 00:22:18.660 "uuid": "711eec91-818a-4b3b-95aa-c86b3ca9ce28", 00:22:18.660 "strip_size_kb": 64, 00:22:18.660 "state": "online", 00:22:18.660 "raid_level": "concat", 00:22:18.660 "superblock": true, 00:22:18.660 "num_base_bdevs": 4, 
00:22:18.660 "num_base_bdevs_discovered": 4, 00:22:18.660 "num_base_bdevs_operational": 4, 00:22:18.660 "base_bdevs_list": [ 00:22:18.660 { 00:22:18.660 "name": "BaseBdev1", 00:22:18.660 "uuid": "3b68177a-d8f3-451b-8955-4cf3309a2d50", 00:22:18.660 "is_configured": true, 00:22:18.660 "data_offset": 2048, 00:22:18.660 "data_size": 63488 00:22:18.660 }, 00:22:18.660 { 00:22:18.660 "name": "BaseBdev2", 00:22:18.660 "uuid": "2a17c5b6-38a3-4ff3-a151-dcb3869bf4c8", 00:22:18.660 "is_configured": true, 00:22:18.660 "data_offset": 2048, 00:22:18.660 "data_size": 63488 00:22:18.660 }, 00:22:18.660 { 00:22:18.660 "name": "BaseBdev3", 00:22:18.660 "uuid": "a4dab642-4d50-468b-ab85-b5bfda9612b1", 00:22:18.660 "is_configured": true, 00:22:18.660 "data_offset": 2048, 00:22:18.660 "data_size": 63488 00:22:18.660 }, 00:22:18.660 { 00:22:18.660 "name": "BaseBdev4", 00:22:18.660 "uuid": "a076c943-8db8-43cb-8dd2-a514945c4824", 00:22:18.660 "is_configured": true, 00:22:18.660 "data_offset": 2048, 00:22:18.660 "data_size": 63488 00:22:18.660 } 00:22:18.660 ] 00:22:18.660 }' 00:22:18.660 23:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:18.660 23:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.921 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:18.921 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:18.921 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:18.921 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:18.921 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:22:18.921 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:18.921 
23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:18.921 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:18.921 23:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.921 23:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.921 [2024-12-09 23:03:54.192974] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:18.921 23:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.921 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:18.921 "name": "Existed_Raid", 00:22:18.921 "aliases": [ 00:22:18.921 "711eec91-818a-4b3b-95aa-c86b3ca9ce28" 00:22:18.921 ], 00:22:18.921 "product_name": "Raid Volume", 00:22:18.921 "block_size": 512, 00:22:18.921 "num_blocks": 253952, 00:22:18.921 "uuid": "711eec91-818a-4b3b-95aa-c86b3ca9ce28", 00:22:18.921 "assigned_rate_limits": { 00:22:18.921 "rw_ios_per_sec": 0, 00:22:18.921 "rw_mbytes_per_sec": 0, 00:22:18.921 "r_mbytes_per_sec": 0, 00:22:18.921 "w_mbytes_per_sec": 0 00:22:18.921 }, 00:22:18.921 "claimed": false, 00:22:18.921 "zoned": false, 00:22:18.921 "supported_io_types": { 00:22:18.921 "read": true, 00:22:18.921 "write": true, 00:22:18.921 "unmap": true, 00:22:18.921 "flush": true, 00:22:18.921 "reset": true, 00:22:18.921 "nvme_admin": false, 00:22:18.921 "nvme_io": false, 00:22:18.921 "nvme_io_md": false, 00:22:18.921 "write_zeroes": true, 00:22:18.921 "zcopy": false, 00:22:18.921 "get_zone_info": false, 00:22:18.921 "zone_management": false, 00:22:18.921 "zone_append": false, 00:22:18.921 "compare": false, 00:22:18.921 "compare_and_write": false, 00:22:18.921 "abort": false, 00:22:18.921 "seek_hole": false, 00:22:18.921 "seek_data": false, 00:22:18.921 "copy": false, 00:22:18.921 
"nvme_iov_md": false 00:22:18.921 }, 00:22:18.921 "memory_domains": [ 00:22:18.921 { 00:22:18.921 "dma_device_id": "system", 00:22:18.921 "dma_device_type": 1 00:22:18.921 }, 00:22:18.921 { 00:22:18.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.921 "dma_device_type": 2 00:22:18.921 }, 00:22:18.921 { 00:22:18.921 "dma_device_id": "system", 00:22:18.921 "dma_device_type": 1 00:22:18.921 }, 00:22:18.921 { 00:22:18.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.921 "dma_device_type": 2 00:22:18.921 }, 00:22:18.921 { 00:22:18.921 "dma_device_id": "system", 00:22:18.921 "dma_device_type": 1 00:22:18.921 }, 00:22:18.921 { 00:22:18.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.921 "dma_device_type": 2 00:22:18.921 }, 00:22:18.921 { 00:22:18.921 "dma_device_id": "system", 00:22:18.921 "dma_device_type": 1 00:22:18.921 }, 00:22:18.921 { 00:22:18.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.921 "dma_device_type": 2 00:22:18.921 } 00:22:18.921 ], 00:22:18.921 "driver_specific": { 00:22:18.921 "raid": { 00:22:18.921 "uuid": "711eec91-818a-4b3b-95aa-c86b3ca9ce28", 00:22:18.921 "strip_size_kb": 64, 00:22:18.921 "state": "online", 00:22:18.921 "raid_level": "concat", 00:22:18.921 "superblock": true, 00:22:18.921 "num_base_bdevs": 4, 00:22:18.921 "num_base_bdevs_discovered": 4, 00:22:18.921 "num_base_bdevs_operational": 4, 00:22:18.921 "base_bdevs_list": [ 00:22:18.921 { 00:22:18.921 "name": "BaseBdev1", 00:22:18.921 "uuid": "3b68177a-d8f3-451b-8955-4cf3309a2d50", 00:22:18.921 "is_configured": true, 00:22:18.921 "data_offset": 2048, 00:22:18.921 "data_size": 63488 00:22:18.921 }, 00:22:18.921 { 00:22:18.921 "name": "BaseBdev2", 00:22:18.921 "uuid": "2a17c5b6-38a3-4ff3-a151-dcb3869bf4c8", 00:22:18.921 "is_configured": true, 00:22:18.921 "data_offset": 2048, 00:22:18.921 "data_size": 63488 00:22:18.921 }, 00:22:18.921 { 00:22:18.921 "name": "BaseBdev3", 00:22:18.921 "uuid": "a4dab642-4d50-468b-ab85-b5bfda9612b1", 00:22:18.921 "is_configured": true, 
00:22:18.921 "data_offset": 2048, 00:22:18.921 "data_size": 63488 00:22:18.921 }, 00:22:18.921 { 00:22:18.921 "name": "BaseBdev4", 00:22:18.921 "uuid": "a076c943-8db8-43cb-8dd2-a514945c4824", 00:22:18.921 "is_configured": true, 00:22:18.921 "data_offset": 2048, 00:22:18.921 "data_size": 63488 00:22:18.922 } 00:22:18.922 ] 00:22:18.922 } 00:22:18.922 } 00:22:18.922 }' 00:22:18.922 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:18.922 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:18.922 BaseBdev2 00:22:18.922 BaseBdev3 00:22:18.922 BaseBdev4' 00:22:18.922 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:18.922 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:18.922 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:18.922 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:18.922 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:19.182 23:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.182 23:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.182 23:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.182 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:19.182 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:19.182 23:03:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:19.182 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:19.182 23:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.182 23:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.182 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:19.182 23:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.182 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:19.182 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:19.182 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:19.182 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:19.182 23:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.182 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:19.182 23:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.183 [2024-12-09 23:03:54.428661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:19.183 [2024-12-09 23:03:54.428743] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:19.183 [2024-12-09 23:03:54.428802] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:19.183 "name": "Existed_Raid", 00:22:19.183 "uuid": "711eec91-818a-4b3b-95aa-c86b3ca9ce28", 00:22:19.183 "strip_size_kb": 64, 00:22:19.183 "state": "offline", 00:22:19.183 "raid_level": "concat", 00:22:19.183 "superblock": true, 00:22:19.183 "num_base_bdevs": 4, 00:22:19.183 "num_base_bdevs_discovered": 3, 00:22:19.183 "num_base_bdevs_operational": 3, 00:22:19.183 "base_bdevs_list": [ 00:22:19.183 { 00:22:19.183 "name": null, 00:22:19.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:19.183 "is_configured": false, 00:22:19.183 "data_offset": 0, 00:22:19.183 "data_size": 63488 00:22:19.183 }, 00:22:19.183 { 00:22:19.183 "name": "BaseBdev2", 00:22:19.183 "uuid": "2a17c5b6-38a3-4ff3-a151-dcb3869bf4c8", 00:22:19.183 "is_configured": true, 00:22:19.183 "data_offset": 2048, 00:22:19.183 "data_size": 63488 00:22:19.183 }, 00:22:19.183 { 00:22:19.183 "name": "BaseBdev3", 00:22:19.183 "uuid": "a4dab642-4d50-468b-ab85-b5bfda9612b1", 00:22:19.183 "is_configured": true, 00:22:19.183 "data_offset": 2048, 00:22:19.183 "data_size": 63488 00:22:19.183 }, 00:22:19.183 { 00:22:19.183 "name": "BaseBdev4", 00:22:19.183 "uuid": "a076c943-8db8-43cb-8dd2-a514945c4824", 00:22:19.183 "is_configured": true, 00:22:19.183 "data_offset": 2048, 00:22:19.183 "data_size": 63488 00:22:19.183 } 00:22:19.183 ] 00:22:19.183 }' 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:19.183 23:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.830 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:19.830 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:19.830 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:19.830 23:03:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.830 23:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.830 23:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.830 23:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.830 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:19.830 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:19.830 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:19.830 23:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.830 23:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.830 [2024-12-09 23:03:54.872928] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:19.830 23:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.831 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:19.831 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:19.831 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.831 23:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.831 23:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.831 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:19.831 23:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:22:19.831 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:19.831 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:19.831 23:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:22:19.831 23:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.831 23:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.831 [2024-12-09 23:03:54.981300] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:19.831 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.831 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:19.831 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:19.831 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:19.831 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.831 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.831 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.831 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.831 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:19.831 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:19.831 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:22:19.831 23:03:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.831 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.831 [2024-12-09 23:03:55.097060] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:19.831 [2024-12-09 23:03:55.097149] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:19.831 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.831 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:19.831 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:19.831 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.831 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.831 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:19.831 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.831 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.093 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:20.093 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:20.093 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:22:20.093 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:22:20.093 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:20.093 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:22:20.093 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.093 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.093 BaseBdev2 00:22:20.093 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.093 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:22:20.093 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:20.093 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:20.093 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:20.093 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:20.093 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:20.093 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:20.093 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.093 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.093 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.094 [ 00:22:20.094 { 00:22:20.094 "name": "BaseBdev2", 00:22:20.094 "aliases": [ 00:22:20.094 
"8bdf37f2-bcc7-4011-946c-83cc485f71f6" 00:22:20.094 ], 00:22:20.094 "product_name": "Malloc disk", 00:22:20.094 "block_size": 512, 00:22:20.094 "num_blocks": 65536, 00:22:20.094 "uuid": "8bdf37f2-bcc7-4011-946c-83cc485f71f6", 00:22:20.094 "assigned_rate_limits": { 00:22:20.094 "rw_ios_per_sec": 0, 00:22:20.094 "rw_mbytes_per_sec": 0, 00:22:20.094 "r_mbytes_per_sec": 0, 00:22:20.094 "w_mbytes_per_sec": 0 00:22:20.094 }, 00:22:20.094 "claimed": false, 00:22:20.094 "zoned": false, 00:22:20.094 "supported_io_types": { 00:22:20.094 "read": true, 00:22:20.094 "write": true, 00:22:20.094 "unmap": true, 00:22:20.094 "flush": true, 00:22:20.094 "reset": true, 00:22:20.094 "nvme_admin": false, 00:22:20.094 "nvme_io": false, 00:22:20.094 "nvme_io_md": false, 00:22:20.094 "write_zeroes": true, 00:22:20.094 "zcopy": true, 00:22:20.094 "get_zone_info": false, 00:22:20.094 "zone_management": false, 00:22:20.094 "zone_append": false, 00:22:20.094 "compare": false, 00:22:20.094 "compare_and_write": false, 00:22:20.094 "abort": true, 00:22:20.094 "seek_hole": false, 00:22:20.094 "seek_data": false, 00:22:20.094 "copy": true, 00:22:20.094 "nvme_iov_md": false 00:22:20.094 }, 00:22:20.094 "memory_domains": [ 00:22:20.094 { 00:22:20.094 "dma_device_id": "system", 00:22:20.094 "dma_device_type": 1 00:22:20.094 }, 00:22:20.094 { 00:22:20.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.094 "dma_device_type": 2 00:22:20.094 } 00:22:20.094 ], 00:22:20.094 "driver_specific": {} 00:22:20.094 } 00:22:20.094 ] 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:20.094 23:03:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.094 BaseBdev3 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.094 [ 00:22:20.094 { 
00:22:20.094 "name": "BaseBdev3", 00:22:20.094 "aliases": [ 00:22:20.094 "9ccf36c5-a5d2-4128-8ce1-d0e85ec78fcf" 00:22:20.094 ], 00:22:20.094 "product_name": "Malloc disk", 00:22:20.094 "block_size": 512, 00:22:20.094 "num_blocks": 65536, 00:22:20.094 "uuid": "9ccf36c5-a5d2-4128-8ce1-d0e85ec78fcf", 00:22:20.094 "assigned_rate_limits": { 00:22:20.094 "rw_ios_per_sec": 0, 00:22:20.094 "rw_mbytes_per_sec": 0, 00:22:20.094 "r_mbytes_per_sec": 0, 00:22:20.094 "w_mbytes_per_sec": 0 00:22:20.094 }, 00:22:20.094 "claimed": false, 00:22:20.094 "zoned": false, 00:22:20.094 "supported_io_types": { 00:22:20.094 "read": true, 00:22:20.094 "write": true, 00:22:20.094 "unmap": true, 00:22:20.094 "flush": true, 00:22:20.094 "reset": true, 00:22:20.094 "nvme_admin": false, 00:22:20.094 "nvme_io": false, 00:22:20.094 "nvme_io_md": false, 00:22:20.094 "write_zeroes": true, 00:22:20.094 "zcopy": true, 00:22:20.094 "get_zone_info": false, 00:22:20.094 "zone_management": false, 00:22:20.094 "zone_append": false, 00:22:20.094 "compare": false, 00:22:20.094 "compare_and_write": false, 00:22:20.094 "abort": true, 00:22:20.094 "seek_hole": false, 00:22:20.094 "seek_data": false, 00:22:20.094 "copy": true, 00:22:20.094 "nvme_iov_md": false 00:22:20.094 }, 00:22:20.094 "memory_domains": [ 00:22:20.094 { 00:22:20.094 "dma_device_id": "system", 00:22:20.094 "dma_device_type": 1 00:22:20.094 }, 00:22:20.094 { 00:22:20.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.094 "dma_device_type": 2 00:22:20.094 } 00:22:20.094 ], 00:22:20.094 "driver_specific": {} 00:22:20.094 } 00:22:20.094 ] 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.094 BaseBdev4 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:22:20.094 [ 00:22:20.094 { 00:22:20.094 "name": "BaseBdev4", 00:22:20.094 "aliases": [ 00:22:20.094 "9b32825d-e951-4946-a0c8-a4aea7ef682c" 00:22:20.094 ], 00:22:20.094 "product_name": "Malloc disk", 00:22:20.094 "block_size": 512, 00:22:20.094 "num_blocks": 65536, 00:22:20.094 "uuid": "9b32825d-e951-4946-a0c8-a4aea7ef682c", 00:22:20.094 "assigned_rate_limits": { 00:22:20.094 "rw_ios_per_sec": 0, 00:22:20.094 "rw_mbytes_per_sec": 0, 00:22:20.094 "r_mbytes_per_sec": 0, 00:22:20.094 "w_mbytes_per_sec": 0 00:22:20.094 }, 00:22:20.094 "claimed": false, 00:22:20.094 "zoned": false, 00:22:20.094 "supported_io_types": { 00:22:20.094 "read": true, 00:22:20.094 "write": true, 00:22:20.094 "unmap": true, 00:22:20.094 "flush": true, 00:22:20.094 "reset": true, 00:22:20.094 "nvme_admin": false, 00:22:20.094 "nvme_io": false, 00:22:20.094 "nvme_io_md": false, 00:22:20.094 "write_zeroes": true, 00:22:20.094 "zcopy": true, 00:22:20.094 "get_zone_info": false, 00:22:20.094 "zone_management": false, 00:22:20.094 "zone_append": false, 00:22:20.094 "compare": false, 00:22:20.094 "compare_and_write": false, 00:22:20.094 "abort": true, 00:22:20.094 "seek_hole": false, 00:22:20.094 "seek_data": false, 00:22:20.094 "copy": true, 00:22:20.094 "nvme_iov_md": false 00:22:20.094 }, 00:22:20.094 "memory_domains": [ 00:22:20.094 { 00:22:20.094 "dma_device_id": "system", 00:22:20.094 "dma_device_type": 1 00:22:20.094 }, 00:22:20.094 { 00:22:20.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.094 "dma_device_type": 2 00:22:20.094 } 00:22:20.094 ], 00:22:20.094 "driver_specific": {} 00:22:20.094 } 00:22:20.094 ] 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:20.094 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:20.095 23:03:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:20.095 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:20.095 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.095 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.095 [2024-12-09 23:03:55.401386] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:20.095 [2024-12-09 23:03:55.401458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:20.095 [2024-12-09 23:03:55.401486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:20.095 [2024-12-09 23:03:55.403676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:20.095 [2024-12-09 23:03:55.403753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:20.095 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.095 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:20.095 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:20.095 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:20.095 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:20.095 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:20.095 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:22:20.095 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:20.095 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:20.095 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:20.095 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:20.095 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:20.095 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.095 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.095 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.095 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.095 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:20.095 "name": "Existed_Raid", 00:22:20.095 "uuid": "d324a097-3668-47ed-8af6-0454d47af488", 00:22:20.095 "strip_size_kb": 64, 00:22:20.095 "state": "configuring", 00:22:20.095 "raid_level": "concat", 00:22:20.095 "superblock": true, 00:22:20.095 "num_base_bdevs": 4, 00:22:20.095 "num_base_bdevs_discovered": 3, 00:22:20.095 "num_base_bdevs_operational": 4, 00:22:20.095 "base_bdevs_list": [ 00:22:20.095 { 00:22:20.095 "name": "BaseBdev1", 00:22:20.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.095 "is_configured": false, 00:22:20.095 "data_offset": 0, 00:22:20.095 "data_size": 0 00:22:20.095 }, 00:22:20.095 { 00:22:20.095 "name": "BaseBdev2", 00:22:20.095 "uuid": "8bdf37f2-bcc7-4011-946c-83cc485f71f6", 00:22:20.095 "is_configured": true, 00:22:20.095 "data_offset": 2048, 00:22:20.095 "data_size": 63488 
00:22:20.095 }, 00:22:20.095 { 00:22:20.095 "name": "BaseBdev3", 00:22:20.095 "uuid": "9ccf36c5-a5d2-4128-8ce1-d0e85ec78fcf", 00:22:20.095 "is_configured": true, 00:22:20.095 "data_offset": 2048, 00:22:20.095 "data_size": 63488 00:22:20.095 }, 00:22:20.095 { 00:22:20.095 "name": "BaseBdev4", 00:22:20.095 "uuid": "9b32825d-e951-4946-a0c8-a4aea7ef682c", 00:22:20.095 "is_configured": true, 00:22:20.095 "data_offset": 2048, 00:22:20.095 "data_size": 63488 00:22:20.095 } 00:22:20.095 ] 00:22:20.095 }' 00:22:20.095 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:20.095 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.670 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:22:20.670 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.670 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.670 [2024-12-09 23:03:55.729448] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:20.670 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.670 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:20.670 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:20.670 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:20.670 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:20.670 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:20.670 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:22:20.670 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:20.670 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:20.670 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:20.670 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:20.670 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.670 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.670 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.670 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:20.670 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.670 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:20.670 "name": "Existed_Raid", 00:22:20.670 "uuid": "d324a097-3668-47ed-8af6-0454d47af488", 00:22:20.670 "strip_size_kb": 64, 00:22:20.670 "state": "configuring", 00:22:20.670 "raid_level": "concat", 00:22:20.670 "superblock": true, 00:22:20.670 "num_base_bdevs": 4, 00:22:20.670 "num_base_bdevs_discovered": 2, 00:22:20.670 "num_base_bdevs_operational": 4, 00:22:20.670 "base_bdevs_list": [ 00:22:20.670 { 00:22:20.670 "name": "BaseBdev1", 00:22:20.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.670 "is_configured": false, 00:22:20.670 "data_offset": 0, 00:22:20.670 "data_size": 0 00:22:20.670 }, 00:22:20.670 { 00:22:20.670 "name": null, 00:22:20.670 "uuid": "8bdf37f2-bcc7-4011-946c-83cc485f71f6", 00:22:20.670 "is_configured": false, 00:22:20.670 "data_offset": 0, 00:22:20.670 "data_size": 63488 
00:22:20.670 }, 00:22:20.670 { 00:22:20.670 "name": "BaseBdev3", 00:22:20.670 "uuid": "9ccf36c5-a5d2-4128-8ce1-d0e85ec78fcf", 00:22:20.670 "is_configured": true, 00:22:20.670 "data_offset": 2048, 00:22:20.670 "data_size": 63488 00:22:20.670 }, 00:22:20.670 { 00:22:20.670 "name": "BaseBdev4", 00:22:20.670 "uuid": "9b32825d-e951-4946-a0c8-a4aea7ef682c", 00:22:20.670 "is_configured": true, 00:22:20.670 "data_offset": 2048, 00:22:20.670 "data_size": 63488 00:22:20.670 } 00:22:20.670 ] 00:22:20.670 }' 00:22:20.670 23:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:20.670 23:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.932 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.932 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.932 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.933 [2024-12-09 23:03:56.133800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:20.933 BaseBdev1 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.933 [ 00:22:20.933 { 00:22:20.933 "name": "BaseBdev1", 00:22:20.933 "aliases": [ 00:22:20.933 "b272a48a-a2e7-4292-aae7-83607641a313" 00:22:20.933 ], 00:22:20.933 "product_name": "Malloc disk", 00:22:20.933 "block_size": 512, 00:22:20.933 "num_blocks": 65536, 00:22:20.933 "uuid": "b272a48a-a2e7-4292-aae7-83607641a313", 00:22:20.933 "assigned_rate_limits": { 00:22:20.933 "rw_ios_per_sec": 0, 00:22:20.933 "rw_mbytes_per_sec": 0, 
00:22:20.933 "r_mbytes_per_sec": 0, 00:22:20.933 "w_mbytes_per_sec": 0 00:22:20.933 }, 00:22:20.933 "claimed": true, 00:22:20.933 "claim_type": "exclusive_write", 00:22:20.933 "zoned": false, 00:22:20.933 "supported_io_types": { 00:22:20.933 "read": true, 00:22:20.933 "write": true, 00:22:20.933 "unmap": true, 00:22:20.933 "flush": true, 00:22:20.933 "reset": true, 00:22:20.933 "nvme_admin": false, 00:22:20.933 "nvme_io": false, 00:22:20.933 "nvme_io_md": false, 00:22:20.933 "write_zeroes": true, 00:22:20.933 "zcopy": true, 00:22:20.933 "get_zone_info": false, 00:22:20.933 "zone_management": false, 00:22:20.933 "zone_append": false, 00:22:20.933 "compare": false, 00:22:20.933 "compare_and_write": false, 00:22:20.933 "abort": true, 00:22:20.933 "seek_hole": false, 00:22:20.933 "seek_data": false, 00:22:20.933 "copy": true, 00:22:20.933 "nvme_iov_md": false 00:22:20.933 }, 00:22:20.933 "memory_domains": [ 00:22:20.933 { 00:22:20.933 "dma_device_id": "system", 00:22:20.933 "dma_device_type": 1 00:22:20.933 }, 00:22:20.933 { 00:22:20.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.933 "dma_device_type": 2 00:22:20.933 } 00:22:20.933 ], 00:22:20.933 "driver_specific": {} 00:22:20.933 } 00:22:20.933 ] 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:20.933 23:03:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:20.933 "name": "Existed_Raid", 00:22:20.933 "uuid": "d324a097-3668-47ed-8af6-0454d47af488", 00:22:20.933 "strip_size_kb": 64, 00:22:20.933 "state": "configuring", 00:22:20.933 "raid_level": "concat", 00:22:20.933 "superblock": true, 00:22:20.933 "num_base_bdevs": 4, 00:22:20.933 "num_base_bdevs_discovered": 3, 00:22:20.933 "num_base_bdevs_operational": 4, 00:22:20.933 "base_bdevs_list": [ 00:22:20.933 { 00:22:20.933 "name": "BaseBdev1", 00:22:20.933 "uuid": "b272a48a-a2e7-4292-aae7-83607641a313", 00:22:20.933 "is_configured": true, 00:22:20.933 "data_offset": 2048, 00:22:20.933 "data_size": 63488 00:22:20.933 }, 00:22:20.933 { 
00:22:20.933 "name": null, 00:22:20.933 "uuid": "8bdf37f2-bcc7-4011-946c-83cc485f71f6", 00:22:20.933 "is_configured": false, 00:22:20.933 "data_offset": 0, 00:22:20.933 "data_size": 63488 00:22:20.933 }, 00:22:20.933 { 00:22:20.933 "name": "BaseBdev3", 00:22:20.933 "uuid": "9ccf36c5-a5d2-4128-8ce1-d0e85ec78fcf", 00:22:20.933 "is_configured": true, 00:22:20.933 "data_offset": 2048, 00:22:20.933 "data_size": 63488 00:22:20.933 }, 00:22:20.933 { 00:22:20.933 "name": "BaseBdev4", 00:22:20.933 "uuid": "9b32825d-e951-4946-a0c8-a4aea7ef682c", 00:22:20.933 "is_configured": true, 00:22:20.933 "data_offset": 2048, 00:22:20.933 "data_size": 63488 00:22:20.933 } 00:22:20.933 ] 00:22:20.933 }' 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:20.933 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.194 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:21.194 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.194 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.194 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:21.194 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.455 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:22:21.455 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:22:21.455 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.455 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.455 [2024-12-09 23:03:56.578039] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:21.455 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.455 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:21.455 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:21.455 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:21.455 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:21.455 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:21.455 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:21.455 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:21.455 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:21.455 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:21.455 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:21.455 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:21.455 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:21.455 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.455 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.455 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.455 23:03:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:21.455 "name": "Existed_Raid", 00:22:21.455 "uuid": "d324a097-3668-47ed-8af6-0454d47af488", 00:22:21.455 "strip_size_kb": 64, 00:22:21.455 "state": "configuring", 00:22:21.455 "raid_level": "concat", 00:22:21.455 "superblock": true, 00:22:21.455 "num_base_bdevs": 4, 00:22:21.455 "num_base_bdevs_discovered": 2, 00:22:21.455 "num_base_bdevs_operational": 4, 00:22:21.455 "base_bdevs_list": [ 00:22:21.455 { 00:22:21.455 "name": "BaseBdev1", 00:22:21.455 "uuid": "b272a48a-a2e7-4292-aae7-83607641a313", 00:22:21.455 "is_configured": true, 00:22:21.455 "data_offset": 2048, 00:22:21.455 "data_size": 63488 00:22:21.455 }, 00:22:21.455 { 00:22:21.455 "name": null, 00:22:21.455 "uuid": "8bdf37f2-bcc7-4011-946c-83cc485f71f6", 00:22:21.455 "is_configured": false, 00:22:21.455 "data_offset": 0, 00:22:21.455 "data_size": 63488 00:22:21.455 }, 00:22:21.455 { 00:22:21.455 "name": null, 00:22:21.455 "uuid": "9ccf36c5-a5d2-4128-8ce1-d0e85ec78fcf", 00:22:21.455 "is_configured": false, 00:22:21.455 "data_offset": 0, 00:22:21.455 "data_size": 63488 00:22:21.455 }, 00:22:21.455 { 00:22:21.455 "name": "BaseBdev4", 00:22:21.455 "uuid": "9b32825d-e951-4946-a0c8-a4aea7ef682c", 00:22:21.455 "is_configured": true, 00:22:21.455 "data_offset": 2048, 00:22:21.455 "data_size": 63488 00:22:21.455 } 00:22:21.455 ] 00:22:21.455 }' 00:22:21.455 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:21.455 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.715 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:21.715 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:21.715 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.715 
23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.715 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.715 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:22:21.715 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:21.715 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.715 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.715 [2024-12-09 23:03:56.946112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:21.715 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.715 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:21.715 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:21.715 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:21.715 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:21.715 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:21.715 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:21.715 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:21.715 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:21.715 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:22:21.715 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:21.715 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:21.715 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.715 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.716 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:21.716 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.716 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:21.716 "name": "Existed_Raid", 00:22:21.716 "uuid": "d324a097-3668-47ed-8af6-0454d47af488", 00:22:21.716 "strip_size_kb": 64, 00:22:21.716 "state": "configuring", 00:22:21.716 "raid_level": "concat", 00:22:21.716 "superblock": true, 00:22:21.716 "num_base_bdevs": 4, 00:22:21.716 "num_base_bdevs_discovered": 3, 00:22:21.716 "num_base_bdevs_operational": 4, 00:22:21.716 "base_bdevs_list": [ 00:22:21.716 { 00:22:21.716 "name": "BaseBdev1", 00:22:21.716 "uuid": "b272a48a-a2e7-4292-aae7-83607641a313", 00:22:21.716 "is_configured": true, 00:22:21.716 "data_offset": 2048, 00:22:21.716 "data_size": 63488 00:22:21.716 }, 00:22:21.716 { 00:22:21.716 "name": null, 00:22:21.716 "uuid": "8bdf37f2-bcc7-4011-946c-83cc485f71f6", 00:22:21.716 "is_configured": false, 00:22:21.716 "data_offset": 0, 00:22:21.716 "data_size": 63488 00:22:21.716 }, 00:22:21.716 { 00:22:21.716 "name": "BaseBdev3", 00:22:21.716 "uuid": "9ccf36c5-a5d2-4128-8ce1-d0e85ec78fcf", 00:22:21.716 "is_configured": true, 00:22:21.716 "data_offset": 2048, 00:22:21.716 "data_size": 63488 00:22:21.716 }, 00:22:21.716 { 00:22:21.716 "name": "BaseBdev4", 00:22:21.716 "uuid": 
"9b32825d-e951-4946-a0c8-a4aea7ef682c", 00:22:21.716 "is_configured": true, 00:22:21.716 "data_offset": 2048, 00:22:21.716 "data_size": 63488 00:22:21.716 } 00:22:21.716 ] 00:22:21.716 }' 00:22:21.716 23:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:21.716 23:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.975 23:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:21.975 23:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.975 23:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.975 23:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:21.975 23:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.975 23:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:22:21.975 23:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:21.975 23:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.975 23:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.975 [2024-12-09 23:03:57.306246] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:22.237 23:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.238 23:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:22.238 23:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:22.238 23:03:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:22.238 23:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:22.238 23:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:22.238 23:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:22.238 23:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:22.238 23:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:22.238 23:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:22.238 23:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:22.238 23:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:22.238 23:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.238 23:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.238 23:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:22.238 23:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.238 23:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:22.238 "name": "Existed_Raid", 00:22:22.238 "uuid": "d324a097-3668-47ed-8af6-0454d47af488", 00:22:22.238 "strip_size_kb": 64, 00:22:22.238 "state": "configuring", 00:22:22.238 "raid_level": "concat", 00:22:22.238 "superblock": true, 00:22:22.238 "num_base_bdevs": 4, 00:22:22.238 "num_base_bdevs_discovered": 2, 00:22:22.238 "num_base_bdevs_operational": 4, 00:22:22.238 "base_bdevs_list": [ 00:22:22.238 { 00:22:22.238 "name": null, 00:22:22.238 
"uuid": "b272a48a-a2e7-4292-aae7-83607641a313", 00:22:22.238 "is_configured": false, 00:22:22.238 "data_offset": 0, 00:22:22.238 "data_size": 63488 00:22:22.238 }, 00:22:22.238 { 00:22:22.238 "name": null, 00:22:22.238 "uuid": "8bdf37f2-bcc7-4011-946c-83cc485f71f6", 00:22:22.238 "is_configured": false, 00:22:22.238 "data_offset": 0, 00:22:22.238 "data_size": 63488 00:22:22.238 }, 00:22:22.238 { 00:22:22.238 "name": "BaseBdev3", 00:22:22.238 "uuid": "9ccf36c5-a5d2-4128-8ce1-d0e85ec78fcf", 00:22:22.238 "is_configured": true, 00:22:22.238 "data_offset": 2048, 00:22:22.238 "data_size": 63488 00:22:22.238 }, 00:22:22.238 { 00:22:22.238 "name": "BaseBdev4", 00:22:22.238 "uuid": "9b32825d-e951-4946-a0c8-a4aea7ef682c", 00:22:22.238 "is_configured": true, 00:22:22.238 "data_offset": 2048, 00:22:22.238 "data_size": 63488 00:22:22.238 } 00:22:22.238 ] 00:22:22.238 }' 00:22:22.238 23:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:22.238 23:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:22.501 23:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.501 23:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.501 23:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:22.501 23:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:22.501 23:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.501 23:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:22:22.501 23:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:22.501 23:03:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.501 23:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:22.501 [2024-12-09 23:03:57.794540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:22.501 23:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.501 23:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:22.501 23:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:22.501 23:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:22.501 23:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:22.501 23:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:22.501 23:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:22.501 23:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:22.501 23:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:22.501 23:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:22.501 23:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:22.501 23:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.501 23:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:22.501 23:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.501 23:03:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:22.501 23:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.501 23:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:22.501 "name": "Existed_Raid", 00:22:22.501 "uuid": "d324a097-3668-47ed-8af6-0454d47af488", 00:22:22.501 "strip_size_kb": 64, 00:22:22.501 "state": "configuring", 00:22:22.501 "raid_level": "concat", 00:22:22.501 "superblock": true, 00:22:22.501 "num_base_bdevs": 4, 00:22:22.501 "num_base_bdevs_discovered": 3, 00:22:22.501 "num_base_bdevs_operational": 4, 00:22:22.501 "base_bdevs_list": [ 00:22:22.501 { 00:22:22.501 "name": null, 00:22:22.501 "uuid": "b272a48a-a2e7-4292-aae7-83607641a313", 00:22:22.501 "is_configured": false, 00:22:22.501 "data_offset": 0, 00:22:22.501 "data_size": 63488 00:22:22.501 }, 00:22:22.501 { 00:22:22.501 "name": "BaseBdev2", 00:22:22.501 "uuid": "8bdf37f2-bcc7-4011-946c-83cc485f71f6", 00:22:22.501 "is_configured": true, 00:22:22.501 "data_offset": 2048, 00:22:22.501 "data_size": 63488 00:22:22.501 }, 00:22:22.501 { 00:22:22.501 "name": "BaseBdev3", 00:22:22.501 "uuid": "9ccf36c5-a5d2-4128-8ce1-d0e85ec78fcf", 00:22:22.501 "is_configured": true, 00:22:22.501 "data_offset": 2048, 00:22:22.501 "data_size": 63488 00:22:22.501 }, 00:22:22.501 { 00:22:22.501 "name": "BaseBdev4", 00:22:22.501 "uuid": "9b32825d-e951-4946-a0c8-a4aea7ef682c", 00:22:22.501 "is_configured": true, 00:22:22.501 "data_offset": 2048, 00:22:22.501 "data_size": 63488 00:22:22.501 } 00:22:22.501 ] 00:22:22.501 }' 00:22:22.501 23:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:22.501 23:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.072 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:23.072 23:03:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:23.072 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.072 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.072 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.072 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:23.072 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:23.072 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.072 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.072 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:23.072 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.072 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b272a48a-a2e7-4292-aae7-83607641a313 00:22:23.072 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.072 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.072 [2024-12-09 23:03:58.251226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:23.072 [2024-12-09 23:03:58.251529] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:23.072 [2024-12-09 23:03:58.251545] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:23.072 [2024-12-09 23:03:58.251847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:22:23.072 [2024-12-09 23:03:58.252035] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:23.072 [2024-12-09 23:03:58.252049] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:23.072 NewBaseBdev 00:22:23.072 [2024-12-09 23:03:58.252213] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:23.073 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.073 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:22:23.073 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:22:23.073 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:23.073 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:23.073 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:23.073 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:23.073 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:23.073 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.073 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.073 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.073 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:23.073 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.073 23:03:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.073 [ 00:22:23.073 { 00:22:23.073 "name": "NewBaseBdev", 00:22:23.073 "aliases": [ 00:22:23.073 "b272a48a-a2e7-4292-aae7-83607641a313" 00:22:23.073 ], 00:22:23.073 "product_name": "Malloc disk", 00:22:23.073 "block_size": 512, 00:22:23.073 "num_blocks": 65536, 00:22:23.073 "uuid": "b272a48a-a2e7-4292-aae7-83607641a313", 00:22:23.073 "assigned_rate_limits": { 00:22:23.073 "rw_ios_per_sec": 0, 00:22:23.073 "rw_mbytes_per_sec": 0, 00:22:23.073 "r_mbytes_per_sec": 0, 00:22:23.073 "w_mbytes_per_sec": 0 00:22:23.073 }, 00:22:23.073 "claimed": true, 00:22:23.073 "claim_type": "exclusive_write", 00:22:23.073 "zoned": false, 00:22:23.073 "supported_io_types": { 00:22:23.073 "read": true, 00:22:23.073 "write": true, 00:22:23.073 "unmap": true, 00:22:23.073 "flush": true, 00:22:23.073 "reset": true, 00:22:23.073 "nvme_admin": false, 00:22:23.073 "nvme_io": false, 00:22:23.073 "nvme_io_md": false, 00:22:23.073 "write_zeroes": true, 00:22:23.073 "zcopy": true, 00:22:23.073 "get_zone_info": false, 00:22:23.073 "zone_management": false, 00:22:23.073 "zone_append": false, 00:22:23.073 "compare": false, 00:22:23.073 "compare_and_write": false, 00:22:23.073 "abort": true, 00:22:23.073 "seek_hole": false, 00:22:23.073 "seek_data": false, 00:22:23.073 "copy": true, 00:22:23.073 "nvme_iov_md": false 00:22:23.073 }, 00:22:23.073 "memory_domains": [ 00:22:23.073 { 00:22:23.073 "dma_device_id": "system", 00:22:23.073 "dma_device_type": 1 00:22:23.073 }, 00:22:23.073 { 00:22:23.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:23.073 "dma_device_type": 2 00:22:23.073 } 00:22:23.073 ], 00:22:23.073 "driver_specific": {} 00:22:23.073 } 00:22:23.073 ] 00:22:23.073 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.073 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:23.073 23:03:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:22:23.073 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:23.073 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:23.073 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:23.073 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:23.073 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:23.073 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:23.073 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:23.073 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:23.073 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:23.073 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:23.073 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.073 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:23.073 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.073 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.073 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:23.073 "name": "Existed_Raid", 00:22:23.073 "uuid": "d324a097-3668-47ed-8af6-0454d47af488", 00:22:23.073 "strip_size_kb": 64, 00:22:23.073 
"state": "online", 00:22:23.073 "raid_level": "concat", 00:22:23.073 "superblock": true, 00:22:23.073 "num_base_bdevs": 4, 00:22:23.073 "num_base_bdevs_discovered": 4, 00:22:23.073 "num_base_bdevs_operational": 4, 00:22:23.073 "base_bdevs_list": [ 00:22:23.073 { 00:22:23.073 "name": "NewBaseBdev", 00:22:23.073 "uuid": "b272a48a-a2e7-4292-aae7-83607641a313", 00:22:23.073 "is_configured": true, 00:22:23.073 "data_offset": 2048, 00:22:23.073 "data_size": 63488 00:22:23.073 }, 00:22:23.073 { 00:22:23.073 "name": "BaseBdev2", 00:22:23.073 "uuid": "8bdf37f2-bcc7-4011-946c-83cc485f71f6", 00:22:23.073 "is_configured": true, 00:22:23.073 "data_offset": 2048, 00:22:23.073 "data_size": 63488 00:22:23.073 }, 00:22:23.073 { 00:22:23.073 "name": "BaseBdev3", 00:22:23.073 "uuid": "9ccf36c5-a5d2-4128-8ce1-d0e85ec78fcf", 00:22:23.073 "is_configured": true, 00:22:23.073 "data_offset": 2048, 00:22:23.073 "data_size": 63488 00:22:23.073 }, 00:22:23.073 { 00:22:23.073 "name": "BaseBdev4", 00:22:23.073 "uuid": "9b32825d-e951-4946-a0c8-a4aea7ef682c", 00:22:23.073 "is_configured": true, 00:22:23.073 "data_offset": 2048, 00:22:23.073 "data_size": 63488 00:22:23.073 } 00:22:23.073 ] 00:22:23.073 }' 00:22:23.073 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:23.073 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.343 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:22:23.343 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:23.343 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:23.343 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:23.343 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:22:23.343 
23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:23.343 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:23.343 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.343 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:23.343 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.343 [2024-12-09 23:03:58.639847] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:23.343 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.343 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:23.343 "name": "Existed_Raid", 00:22:23.343 "aliases": [ 00:22:23.343 "d324a097-3668-47ed-8af6-0454d47af488" 00:22:23.343 ], 00:22:23.343 "product_name": "Raid Volume", 00:22:23.343 "block_size": 512, 00:22:23.343 "num_blocks": 253952, 00:22:23.343 "uuid": "d324a097-3668-47ed-8af6-0454d47af488", 00:22:23.343 "assigned_rate_limits": { 00:22:23.343 "rw_ios_per_sec": 0, 00:22:23.343 "rw_mbytes_per_sec": 0, 00:22:23.343 "r_mbytes_per_sec": 0, 00:22:23.343 "w_mbytes_per_sec": 0 00:22:23.343 }, 00:22:23.343 "claimed": false, 00:22:23.343 "zoned": false, 00:22:23.343 "supported_io_types": { 00:22:23.343 "read": true, 00:22:23.343 "write": true, 00:22:23.343 "unmap": true, 00:22:23.343 "flush": true, 00:22:23.343 "reset": true, 00:22:23.343 "nvme_admin": false, 00:22:23.343 "nvme_io": false, 00:22:23.343 "nvme_io_md": false, 00:22:23.343 "write_zeroes": true, 00:22:23.343 "zcopy": false, 00:22:23.343 "get_zone_info": false, 00:22:23.343 "zone_management": false, 00:22:23.343 "zone_append": false, 00:22:23.343 "compare": false, 00:22:23.343 "compare_and_write": false, 00:22:23.343 "abort": 
false, 00:22:23.343 "seek_hole": false, 00:22:23.343 "seek_data": false, 00:22:23.343 "copy": false, 00:22:23.343 "nvme_iov_md": false 00:22:23.343 }, 00:22:23.343 "memory_domains": [ 00:22:23.343 { 00:22:23.343 "dma_device_id": "system", 00:22:23.343 "dma_device_type": 1 00:22:23.343 }, 00:22:23.343 { 00:22:23.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:23.343 "dma_device_type": 2 00:22:23.343 }, 00:22:23.343 { 00:22:23.343 "dma_device_id": "system", 00:22:23.343 "dma_device_type": 1 00:22:23.343 }, 00:22:23.343 { 00:22:23.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:23.343 "dma_device_type": 2 00:22:23.343 }, 00:22:23.343 { 00:22:23.343 "dma_device_id": "system", 00:22:23.343 "dma_device_type": 1 00:22:23.343 }, 00:22:23.343 { 00:22:23.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:23.343 "dma_device_type": 2 00:22:23.343 }, 00:22:23.343 { 00:22:23.343 "dma_device_id": "system", 00:22:23.343 "dma_device_type": 1 00:22:23.343 }, 00:22:23.343 { 00:22:23.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:23.343 "dma_device_type": 2 00:22:23.343 } 00:22:23.343 ], 00:22:23.343 "driver_specific": { 00:22:23.343 "raid": { 00:22:23.343 "uuid": "d324a097-3668-47ed-8af6-0454d47af488", 00:22:23.343 "strip_size_kb": 64, 00:22:23.343 "state": "online", 00:22:23.343 "raid_level": "concat", 00:22:23.343 "superblock": true, 00:22:23.343 "num_base_bdevs": 4, 00:22:23.343 "num_base_bdevs_discovered": 4, 00:22:23.343 "num_base_bdevs_operational": 4, 00:22:23.343 "base_bdevs_list": [ 00:22:23.343 { 00:22:23.343 "name": "NewBaseBdev", 00:22:23.343 "uuid": "b272a48a-a2e7-4292-aae7-83607641a313", 00:22:23.343 "is_configured": true, 00:22:23.343 "data_offset": 2048, 00:22:23.343 "data_size": 63488 00:22:23.343 }, 00:22:23.343 { 00:22:23.343 "name": "BaseBdev2", 00:22:23.343 "uuid": "8bdf37f2-bcc7-4011-946c-83cc485f71f6", 00:22:23.343 "is_configured": true, 00:22:23.343 "data_offset": 2048, 00:22:23.343 "data_size": 63488 00:22:23.343 }, 00:22:23.343 { 00:22:23.343 
"name": "BaseBdev3", 00:22:23.344 "uuid": "9ccf36c5-a5d2-4128-8ce1-d0e85ec78fcf", 00:22:23.344 "is_configured": true, 00:22:23.344 "data_offset": 2048, 00:22:23.344 "data_size": 63488 00:22:23.344 }, 00:22:23.344 { 00:22:23.344 "name": "BaseBdev4", 00:22:23.344 "uuid": "9b32825d-e951-4946-a0c8-a4aea7ef682c", 00:22:23.344 "is_configured": true, 00:22:23.344 "data_offset": 2048, 00:22:23.344 "data_size": 63488 00:22:23.344 } 00:22:23.344 ] 00:22:23.344 } 00:22:23.344 } 00:22:23.344 }' 00:22:23.344 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:23.609 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:22:23.609 BaseBdev2 00:22:23.609 BaseBdev3 00:22:23.609 BaseBdev4' 00:22:23.609 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:23.609 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:23.609 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:23.609 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:22:23.609 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.609 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.609 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:23.609 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:23.610 23:03:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.610 [2024-12-09 23:03:58.875486] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:23.610 [2024-12-09 23:03:58.875535] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:23.610 [2024-12-09 23:03:58.875627] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:23.610 [2024-12-09 23:03:58.875711] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:23.610 [2024-12-09 23:03:58.875723] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70135 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70135 ']' 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70135 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70135 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:23.610 killing process with pid 70135 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70135' 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70135 00:22:23.610 [2024-12-09 23:03:58.907924] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:23.610 23:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70135 00:22:23.872 [2024-12-09 23:03:59.189335] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:24.821 ************************************ 00:22:24.821 END TEST raid_state_function_test_sb 00:22:24.821 23:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:22:24.821 00:22:24.821 real 0m9.025s 00:22:24.821 user 0m14.135s 00:22:24.821 sys 0m1.566s 00:22:24.821 23:04:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:24.821 23:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:24.821 ************************************ 00:22:24.821 23:04:00 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:22:24.821 23:04:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:24.821 23:04:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:24.821 23:04:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:24.821 ************************************ 00:22:24.821 START TEST raid_superblock_test 00:22:24.821 ************************************ 00:22:24.821 23:04:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:22:24.821 23:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:22:24.821 23:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:22:24.821 23:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:24.821 23:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:24.821 23:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:24.821 23:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:22:24.821 23:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:24.821 23:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:24.821 23:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:22:24.821 23:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:22:24.821 23:04:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:22:24.821 23:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:24.821 23:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:24.821 23:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:22:24.821 23:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:22:24.821 23:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:22:24.821 23:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70778 00:22:24.821 23:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70778 00:22:24.821 23:04:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70778 ']' 00:22:24.821 23:04:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.821 23:04:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:22:24.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:24.821 23:04:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:24.821 23:04:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.821 23:04:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:24.821 23:04:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.821 [2024-12-09 23:04:00.175690] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:22:24.821 [2024-12-09 23:04:00.175862] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70778 ] 00:22:25.082 [2024-12-09 23:04:00.337871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.341 [2024-12-09 23:04:00.462430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.341 [2024-12-09 23:04:00.628216] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:25.341 [2024-12-09 23:04:00.628303] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:25.912 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:25.912 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:22:25.912 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:22:25.912 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:25.912 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:22:25.912 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:22:25.912 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:22:25.913 
23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.913 malloc1 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.913 [2024-12-09 23:04:01.106588] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:25.913 [2024-12-09 23:04:01.106677] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:25.913 [2024-12-09 23:04:01.106702] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:25.913 [2024-12-09 23:04:01.106713] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:25.913 [2024-12-09 23:04:01.109345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:25.913 [2024-12-09 23:04:01.109402] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:25.913 pt1 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.913 malloc2 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.913 [2024-12-09 23:04:01.154115] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:25.913 [2024-12-09 23:04:01.154207] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:25.913 [2024-12-09 23:04:01.154241] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:25.913 [2024-12-09 23:04:01.154252] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:25.913 [2024-12-09 23:04:01.156961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:25.913 [2024-12-09 23:04:01.157020] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:25.913 
pt2 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.913 malloc3 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.913 [2024-12-09 23:04:01.211250] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:25.913 [2024-12-09 23:04:01.211345] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:25.913 [2024-12-09 23:04:01.211374] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:25.913 [2024-12-09 23:04:01.211385] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:25.913 [2024-12-09 23:04:01.213988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:25.913 [2024-12-09 23:04:01.214049] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:25.913 pt3 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.913 malloc4 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.913 [2024-12-09 23:04:01.257489] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:25.913 [2024-12-09 23:04:01.257572] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:25.913 [2024-12-09 23:04:01.257593] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:22:25.913 [2024-12-09 23:04:01.257602] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:25.913 [2024-12-09 23:04:01.260145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:25.913 [2024-12-09 23:04:01.260195] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:25.913 pt4 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.913 [2024-12-09 23:04:01.265541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:25.913 [2024-12-09 
23:04:01.267803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:25.913 [2024-12-09 23:04:01.267939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:25.913 [2024-12-09 23:04:01.268002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:25.913 [2024-12-09 23:04:01.268296] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:25.913 [2024-12-09 23:04:01.268320] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:25.913 [2024-12-09 23:04:01.268743] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:25.913 [2024-12-09 23:04:01.268957] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:25.913 [2024-12-09 23:04:01.268981] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:25.913 [2024-12-09 23:04:01.269202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:25.913 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:26.211 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.211 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.211 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.211 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.211 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.211 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:26.211 "name": "raid_bdev1", 00:22:26.211 "uuid": "86ce18a7-473e-4589-97af-747a5931c644", 00:22:26.211 "strip_size_kb": 64, 00:22:26.211 "state": "online", 00:22:26.211 "raid_level": "concat", 00:22:26.211 "superblock": true, 00:22:26.211 "num_base_bdevs": 4, 00:22:26.211 "num_base_bdevs_discovered": 4, 00:22:26.211 "num_base_bdevs_operational": 4, 00:22:26.211 "base_bdevs_list": [ 00:22:26.211 { 00:22:26.211 "name": "pt1", 00:22:26.211 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:26.211 "is_configured": true, 00:22:26.211 "data_offset": 2048, 00:22:26.211 "data_size": 63488 00:22:26.211 }, 00:22:26.211 { 00:22:26.211 "name": "pt2", 00:22:26.211 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:26.211 "is_configured": true, 00:22:26.211 "data_offset": 2048, 00:22:26.211 "data_size": 63488 00:22:26.211 }, 00:22:26.211 { 00:22:26.211 "name": "pt3", 00:22:26.211 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:26.211 "is_configured": true, 00:22:26.211 "data_offset": 2048, 00:22:26.211 
"data_size": 63488 00:22:26.211 }, 00:22:26.211 { 00:22:26.211 "name": "pt4", 00:22:26.211 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:26.211 "is_configured": true, 00:22:26.211 "data_offset": 2048, 00:22:26.211 "data_size": 63488 00:22:26.211 } 00:22:26.211 ] 00:22:26.211 }' 00:22:26.211 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:26.211 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.485 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:22:26.485 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:26.485 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:26.485 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:26.485 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:26.485 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:26.485 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:26.485 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:26.485 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.485 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.485 [2024-12-09 23:04:01.601947] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:26.485 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.485 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:26.485 "name": "raid_bdev1", 00:22:26.485 "aliases": [ 00:22:26.485 "86ce18a7-473e-4589-97af-747a5931c644" 
00:22:26.485 ], 00:22:26.485 "product_name": "Raid Volume", 00:22:26.485 "block_size": 512, 00:22:26.485 "num_blocks": 253952, 00:22:26.485 "uuid": "86ce18a7-473e-4589-97af-747a5931c644", 00:22:26.485 "assigned_rate_limits": { 00:22:26.485 "rw_ios_per_sec": 0, 00:22:26.485 "rw_mbytes_per_sec": 0, 00:22:26.485 "r_mbytes_per_sec": 0, 00:22:26.485 "w_mbytes_per_sec": 0 00:22:26.485 }, 00:22:26.485 "claimed": false, 00:22:26.485 "zoned": false, 00:22:26.485 "supported_io_types": { 00:22:26.485 "read": true, 00:22:26.485 "write": true, 00:22:26.485 "unmap": true, 00:22:26.485 "flush": true, 00:22:26.485 "reset": true, 00:22:26.485 "nvme_admin": false, 00:22:26.485 "nvme_io": false, 00:22:26.485 "nvme_io_md": false, 00:22:26.485 "write_zeroes": true, 00:22:26.485 "zcopy": false, 00:22:26.485 "get_zone_info": false, 00:22:26.485 "zone_management": false, 00:22:26.485 "zone_append": false, 00:22:26.485 "compare": false, 00:22:26.485 "compare_and_write": false, 00:22:26.485 "abort": false, 00:22:26.485 "seek_hole": false, 00:22:26.485 "seek_data": false, 00:22:26.485 "copy": false, 00:22:26.485 "nvme_iov_md": false 00:22:26.485 }, 00:22:26.485 "memory_domains": [ 00:22:26.485 { 00:22:26.485 "dma_device_id": "system", 00:22:26.485 "dma_device_type": 1 00:22:26.485 }, 00:22:26.485 { 00:22:26.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.485 "dma_device_type": 2 00:22:26.485 }, 00:22:26.485 { 00:22:26.485 "dma_device_id": "system", 00:22:26.485 "dma_device_type": 1 00:22:26.485 }, 00:22:26.485 { 00:22:26.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.485 "dma_device_type": 2 00:22:26.485 }, 00:22:26.485 { 00:22:26.485 "dma_device_id": "system", 00:22:26.485 "dma_device_type": 1 00:22:26.485 }, 00:22:26.485 { 00:22:26.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.485 "dma_device_type": 2 00:22:26.485 }, 00:22:26.485 { 00:22:26.485 "dma_device_id": "system", 00:22:26.485 "dma_device_type": 1 00:22:26.485 }, 00:22:26.485 { 00:22:26.485 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:22:26.485 "dma_device_type": 2 00:22:26.485 } 00:22:26.485 ], 00:22:26.485 "driver_specific": { 00:22:26.485 "raid": { 00:22:26.485 "uuid": "86ce18a7-473e-4589-97af-747a5931c644", 00:22:26.485 "strip_size_kb": 64, 00:22:26.485 "state": "online", 00:22:26.485 "raid_level": "concat", 00:22:26.485 "superblock": true, 00:22:26.485 "num_base_bdevs": 4, 00:22:26.485 "num_base_bdevs_discovered": 4, 00:22:26.485 "num_base_bdevs_operational": 4, 00:22:26.485 "base_bdevs_list": [ 00:22:26.485 { 00:22:26.485 "name": "pt1", 00:22:26.485 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:26.485 "is_configured": true, 00:22:26.485 "data_offset": 2048, 00:22:26.485 "data_size": 63488 00:22:26.485 }, 00:22:26.485 { 00:22:26.485 "name": "pt2", 00:22:26.485 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:26.485 "is_configured": true, 00:22:26.485 "data_offset": 2048, 00:22:26.485 "data_size": 63488 00:22:26.485 }, 00:22:26.485 { 00:22:26.485 "name": "pt3", 00:22:26.485 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:26.485 "is_configured": true, 00:22:26.485 "data_offset": 2048, 00:22:26.485 "data_size": 63488 00:22:26.485 }, 00:22:26.485 { 00:22:26.485 "name": "pt4", 00:22:26.485 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:26.486 "is_configured": true, 00:22:26.486 "data_offset": 2048, 00:22:26.486 "data_size": 63488 00:22:26.486 } 00:22:26.486 ] 00:22:26.486 } 00:22:26.486 } 00:22:26.486 }' 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:26.486 pt2 00:22:26.486 pt3 00:22:26.486 pt4' 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:26.486 23:04:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | 
.uuid' 00:22:26.486 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.750 [2024-12-09 23:04:01.845995] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=86ce18a7-473e-4589-97af-747a5931c644 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 86ce18a7-473e-4589-97af-747a5931c644 ']' 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.750 [2024-12-09 23:04:01.873648] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:26.750 [2024-12-09 23:04:01.873681] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:26.750 [2024-12-09 23:04:01.873781] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:26.750 [2024-12-09 23:04:01.873866] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:26.750 [2024-12-09 23:04:01.873883] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:22:26.750 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.751 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.751 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.751 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:22:26.751 23:04:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:22:26.751 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:22:26.751 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:22:26.751 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:26.751 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:26.751 23:04:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:26.751 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:26.751 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:22:26.751 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.751 23:04:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.751 [2024-12-09 23:04:02.001715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:26.751 [2024-12-09 23:04:02.003999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:26.751 [2024-12-09 23:04:02.004068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:26.751 [2024-12-09 23:04:02.004137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:22:26.751 [2024-12-09 23:04:02.004201] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:26.751 [2024-12-09 23:04:02.004269] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:26.751 [2024-12-09 23:04:02.004290] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:22:26.751 [2024-12-09 23:04:02.004309] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:22:26.751 [2024-12-09 23:04:02.004323] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:26.751 [2024-12-09 23:04:02.004357] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:22:26.751 request: 00:22:26.751 { 00:22:26.751 "name": "raid_bdev1", 00:22:26.751 "raid_level": "concat", 00:22:26.751 "base_bdevs": [ 00:22:26.751 "malloc1", 00:22:26.751 "malloc2", 00:22:26.751 "malloc3", 00:22:26.751 "malloc4" 00:22:26.751 ], 00:22:26.751 "strip_size_kb": 64, 00:22:26.751 "superblock": false, 00:22:26.751 "method": "bdev_raid_create", 00:22:26.751 "req_id": 1 00:22:26.751 } 00:22:26.751 Got JSON-RPC error response 00:22:26.751 response: 00:22:26.751 { 00:22:26.751 "code": -17, 00:22:26.751 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:26.751 } 00:22:26.751 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:26.751 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:22:26.751 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:26.751 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:26.751 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:26.751 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.751 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.751 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.751 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:22:26.751 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.751 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:22:26.751 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:22:26.751 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:22:26.751 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.751 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.751 [2024-12-09 23:04:02.045694] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:26.751 [2024-12-09 23:04:02.045779] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:26.751 [2024-12-09 23:04:02.045805] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:26.751 [2024-12-09 23:04:02.045818] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:26.751 [2024-12-09 23:04:02.048485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:26.751 [2024-12-09 23:04:02.048545] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:26.751 [2024-12-09 23:04:02.048673] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:26.751 [2024-12-09 23:04:02.048741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:26.751 pt1 00:22:26.751 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.751 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:22:26.751 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:26.751 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:26.751 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:26.751 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:26.751 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:22:26.751 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:26.751 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:26.751 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:26.751 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:26.751 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.751 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.751 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.751 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.751 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.751 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:26.751 "name": "raid_bdev1", 00:22:26.751 "uuid": "86ce18a7-473e-4589-97af-747a5931c644", 00:22:26.751 "strip_size_kb": 64, 00:22:26.751 "state": "configuring", 00:22:26.751 "raid_level": "concat", 00:22:26.751 "superblock": true, 00:22:26.751 "num_base_bdevs": 4, 00:22:26.751 "num_base_bdevs_discovered": 1, 00:22:26.751 "num_base_bdevs_operational": 4, 00:22:26.751 "base_bdevs_list": [ 00:22:26.751 { 00:22:26.751 "name": "pt1", 00:22:26.751 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:26.751 "is_configured": true, 00:22:26.751 "data_offset": 2048, 00:22:26.751 "data_size": 63488 00:22:26.751 }, 00:22:26.751 { 00:22:26.751 "name": null, 00:22:26.751 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:26.751 "is_configured": false, 00:22:26.751 "data_offset": 2048, 00:22:26.751 "data_size": 63488 00:22:26.751 }, 00:22:26.751 { 00:22:26.751 "name": null, 00:22:26.751 
"uuid": "00000000-0000-0000-0000-000000000003", 00:22:26.751 "is_configured": false, 00:22:26.751 "data_offset": 2048, 00:22:26.751 "data_size": 63488 00:22:26.751 }, 00:22:26.751 { 00:22:26.751 "name": null, 00:22:26.751 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:26.751 "is_configured": false, 00:22:26.751 "data_offset": 2048, 00:22:26.751 "data_size": 63488 00:22:26.751 } 00:22:26.751 ] 00:22:26.751 }' 00:22:26.751 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:26.751 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.013 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:22:27.013 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:27.013 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.013 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.013 [2024-12-09 23:04:02.369797] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:27.013 [2024-12-09 23:04:02.370153] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:27.013 [2024-12-09 23:04:02.370209] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:22:27.013 [2024-12-09 23:04:02.370433] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:27.013 [2024-12-09 23:04:02.370996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:27.013 [2024-12-09 23:04:02.371152] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:27.013 [2024-12-09 23:04:02.371291] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:27.013 [2024-12-09 23:04:02.371341] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:27.276 pt2 00:22:27.276 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.276 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:22:27.276 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.276 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.276 [2024-12-09 23:04:02.377783] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:27.276 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.276 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:22:27.276 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:27.276 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:27.276 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:27.276 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:27.276 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:27.276 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:27.276 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:27.276 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:27.276 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:27.276 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:27.276 23:04:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.276 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.276 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.276 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.276 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:27.276 "name": "raid_bdev1", 00:22:27.276 "uuid": "86ce18a7-473e-4589-97af-747a5931c644", 00:22:27.276 "strip_size_kb": 64, 00:22:27.276 "state": "configuring", 00:22:27.276 "raid_level": "concat", 00:22:27.276 "superblock": true, 00:22:27.276 "num_base_bdevs": 4, 00:22:27.276 "num_base_bdevs_discovered": 1, 00:22:27.276 "num_base_bdevs_operational": 4, 00:22:27.276 "base_bdevs_list": [ 00:22:27.276 { 00:22:27.276 "name": "pt1", 00:22:27.276 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:27.276 "is_configured": true, 00:22:27.276 "data_offset": 2048, 00:22:27.276 "data_size": 63488 00:22:27.276 }, 00:22:27.276 { 00:22:27.276 "name": null, 00:22:27.276 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:27.276 "is_configured": false, 00:22:27.276 "data_offset": 0, 00:22:27.276 "data_size": 63488 00:22:27.276 }, 00:22:27.276 { 00:22:27.276 "name": null, 00:22:27.276 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:27.276 "is_configured": false, 00:22:27.276 "data_offset": 2048, 00:22:27.276 "data_size": 63488 00:22:27.276 }, 00:22:27.276 { 00:22:27.276 "name": null, 00:22:27.276 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:27.276 "is_configured": false, 00:22:27.276 "data_offset": 2048, 00:22:27.276 "data_size": 63488 00:22:27.276 } 00:22:27.276 ] 00:22:27.276 }' 00:22:27.276 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:27.276 23:04:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.549 [2024-12-09 23:04:02.705860] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:27.549 [2024-12-09 23:04:02.705949] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:27.549 [2024-12-09 23:04:02.705970] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:22:27.549 [2024-12-09 23:04:02.705981] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:27.549 [2024-12-09 23:04:02.706512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:27.549 [2024-12-09 23:04:02.706531] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:27.549 [2024-12-09 23:04:02.706627] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:27.549 [2024-12-09 23:04:02.706651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:27.549 pt2 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.549 [2024-12-09 23:04:02.713856] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:27.549 [2024-12-09 23:04:02.714088] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:27.549 [2024-12-09 23:04:02.714133] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:22:27.549 [2024-12-09 23:04:02.714144] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:27.549 [2024-12-09 23:04:02.714647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:27.549 [2024-12-09 23:04:02.714674] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:27.549 [2024-12-09 23:04:02.714767] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:27.549 [2024-12-09 23:04:02.714796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:27.549 pt3 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.549 [2024-12-09 23:04:02.721822] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:27.549 [2024-12-09 23:04:02.721891] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:27.549 [2024-12-09 23:04:02.721912] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:22:27.549 [2024-12-09 23:04:02.721923] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:27.549 [2024-12-09 23:04:02.722452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:27.549 [2024-12-09 23:04:02.722471] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:27.549 [2024-12-09 23:04:02.722563] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:22:27.549 [2024-12-09 23:04:02.722589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:27.549 [2024-12-09 23:04:02.722745] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:27.549 [2024-12-09 23:04:02.722754] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:27.549 [2024-12-09 23:04:02.723041] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:27.549 [2024-12-09 23:04:02.723231] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:27.549 [2024-12-09 23:04:02.723243] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:22:27.549 [2024-12-09 23:04:02.723386] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:27.549 pt4 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:27.549 "name": "raid_bdev1", 00:22:27.549 "uuid": "86ce18a7-473e-4589-97af-747a5931c644", 00:22:27.549 "strip_size_kb": 64, 00:22:27.549 "state": "online", 00:22:27.549 "raid_level": "concat", 00:22:27.549 
"superblock": true, 00:22:27.549 "num_base_bdevs": 4, 00:22:27.549 "num_base_bdevs_discovered": 4, 00:22:27.549 "num_base_bdevs_operational": 4, 00:22:27.549 "base_bdevs_list": [ 00:22:27.549 { 00:22:27.549 "name": "pt1", 00:22:27.549 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:27.549 "is_configured": true, 00:22:27.549 "data_offset": 2048, 00:22:27.549 "data_size": 63488 00:22:27.549 }, 00:22:27.549 { 00:22:27.549 "name": "pt2", 00:22:27.549 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:27.549 "is_configured": true, 00:22:27.549 "data_offset": 2048, 00:22:27.549 "data_size": 63488 00:22:27.549 }, 00:22:27.549 { 00:22:27.549 "name": "pt3", 00:22:27.549 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:27.549 "is_configured": true, 00:22:27.549 "data_offset": 2048, 00:22:27.549 "data_size": 63488 00:22:27.549 }, 00:22:27.549 { 00:22:27.549 "name": "pt4", 00:22:27.549 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:27.549 "is_configured": true, 00:22:27.549 "data_offset": 2048, 00:22:27.549 "data_size": 63488 00:22:27.549 } 00:22:27.549 ] 00:22:27.549 }' 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:27.549 23:04:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.811 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:22:27.811 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:27.811 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:27.811 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:27.811 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:27.811 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:27.811 23:04:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:27.811 23:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.811 23:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.811 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:27.811 [2024-12-09 23:04:03.078357] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:27.811 23:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.811 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:27.811 "name": "raid_bdev1", 00:22:27.811 "aliases": [ 00:22:27.812 "86ce18a7-473e-4589-97af-747a5931c644" 00:22:27.812 ], 00:22:27.812 "product_name": "Raid Volume", 00:22:27.812 "block_size": 512, 00:22:27.812 "num_blocks": 253952, 00:22:27.812 "uuid": "86ce18a7-473e-4589-97af-747a5931c644", 00:22:27.812 "assigned_rate_limits": { 00:22:27.812 "rw_ios_per_sec": 0, 00:22:27.812 "rw_mbytes_per_sec": 0, 00:22:27.812 "r_mbytes_per_sec": 0, 00:22:27.812 "w_mbytes_per_sec": 0 00:22:27.812 }, 00:22:27.812 "claimed": false, 00:22:27.812 "zoned": false, 00:22:27.812 "supported_io_types": { 00:22:27.812 "read": true, 00:22:27.812 "write": true, 00:22:27.812 "unmap": true, 00:22:27.812 "flush": true, 00:22:27.812 "reset": true, 00:22:27.812 "nvme_admin": false, 00:22:27.812 "nvme_io": false, 00:22:27.812 "nvme_io_md": false, 00:22:27.812 "write_zeroes": true, 00:22:27.812 "zcopy": false, 00:22:27.812 "get_zone_info": false, 00:22:27.812 "zone_management": false, 00:22:27.812 "zone_append": false, 00:22:27.812 "compare": false, 00:22:27.812 "compare_and_write": false, 00:22:27.812 "abort": false, 00:22:27.812 "seek_hole": false, 00:22:27.812 "seek_data": false, 00:22:27.812 "copy": false, 00:22:27.812 "nvme_iov_md": false 00:22:27.812 }, 00:22:27.812 
"memory_domains": [ 00:22:27.812 { 00:22:27.812 "dma_device_id": "system", 00:22:27.812 "dma_device_type": 1 00:22:27.812 }, 00:22:27.812 { 00:22:27.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:27.812 "dma_device_type": 2 00:22:27.812 }, 00:22:27.812 { 00:22:27.812 "dma_device_id": "system", 00:22:27.812 "dma_device_type": 1 00:22:27.812 }, 00:22:27.812 { 00:22:27.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:27.812 "dma_device_type": 2 00:22:27.812 }, 00:22:27.812 { 00:22:27.812 "dma_device_id": "system", 00:22:27.812 "dma_device_type": 1 00:22:27.812 }, 00:22:27.812 { 00:22:27.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:27.812 "dma_device_type": 2 00:22:27.812 }, 00:22:27.812 { 00:22:27.812 "dma_device_id": "system", 00:22:27.812 "dma_device_type": 1 00:22:27.812 }, 00:22:27.812 { 00:22:27.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:27.812 "dma_device_type": 2 00:22:27.812 } 00:22:27.812 ], 00:22:27.812 "driver_specific": { 00:22:27.812 "raid": { 00:22:27.812 "uuid": "86ce18a7-473e-4589-97af-747a5931c644", 00:22:27.812 "strip_size_kb": 64, 00:22:27.812 "state": "online", 00:22:27.812 "raid_level": "concat", 00:22:27.812 "superblock": true, 00:22:27.812 "num_base_bdevs": 4, 00:22:27.812 "num_base_bdevs_discovered": 4, 00:22:27.812 "num_base_bdevs_operational": 4, 00:22:27.812 "base_bdevs_list": [ 00:22:27.812 { 00:22:27.812 "name": "pt1", 00:22:27.812 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:27.812 "is_configured": true, 00:22:27.812 "data_offset": 2048, 00:22:27.812 "data_size": 63488 00:22:27.812 }, 00:22:27.812 { 00:22:27.812 "name": "pt2", 00:22:27.812 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:27.812 "is_configured": true, 00:22:27.812 "data_offset": 2048, 00:22:27.812 "data_size": 63488 00:22:27.812 }, 00:22:27.812 { 00:22:27.812 "name": "pt3", 00:22:27.812 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:27.812 "is_configured": true, 00:22:27.812 "data_offset": 2048, 00:22:27.812 "data_size": 63488 
00:22:27.812 }, 00:22:27.812 { 00:22:27.812 "name": "pt4", 00:22:27.812 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:27.812 "is_configured": true, 00:22:27.812 "data_offset": 2048, 00:22:27.812 "data_size": 63488 00:22:27.812 } 00:22:27.812 ] 00:22:27.812 } 00:22:27.812 } 00:22:27.812 }' 00:22:27.812 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:27.812 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:27.812 pt2 00:22:27.812 pt3 00:22:27.812 pt4' 00:22:27.812 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:27.812 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:27.812 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:27.812 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:27.812 23:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.812 23:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.812 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:28.075 [2024-12-09 23:04:03.314359] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 86ce18a7-473e-4589-97af-747a5931c644 '!=' 86ce18a7-473e-4589-97af-747a5931c644 ']' 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70778 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70778 ']' 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70778 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70778 00:22:28.075 killing process with pid 70778 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70778' 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70778 00:22:28.075 23:04:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70778 00:22:28.075 [2024-12-09 23:04:03.373921] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:28.075 [2024-12-09 23:04:03.374044] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:28.075 [2024-12-09 23:04:03.374157] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:28.075 [2024-12-09 23:04:03.374169] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:22:28.338 [2024-12-09 23:04:03.651280] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:29.283 23:04:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:22:29.283 00:22:29.283 real 0m4.363s 00:22:29.283 user 0m6.013s 00:22:29.283 sys 0m0.867s 00:22:29.283 ************************************ 00:22:29.283 END TEST raid_superblock_test 00:22:29.283 ************************************ 00:22:29.283 23:04:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:29.283 23:04:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.283 23:04:04 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:22:29.283 23:04:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:29.283 23:04:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:29.283 23:04:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:29.283 ************************************ 00:22:29.283 START TEST raid_read_error_test 00:22:29.283 ************************************ 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.UX6BCA5KLa 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71026 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71026 00:22:29.283 Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71026 ']' 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.283 23:04:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:29.283 [2024-12-09 23:04:04.621156] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:22:29.284 [2024-12-09 23:04:04.621531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71026 ] 00:22:29.545 [2024-12-09 23:04:04.778998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.807 [2024-12-09 23:04:04.921207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.807 [2024-12-09 23:04:05.086315] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:29.807 [2024-12-09 23:04:05.086387] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.379 BaseBdev1_malloc 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.379 true 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.379 [2024-12-09 23:04:05.547956] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:30.379 [2024-12-09 23:04:05.548033] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:30.379 [2024-12-09 23:04:05.548058] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:22:30.379 [2024-12-09 23:04:05.548071] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:30.379 [2024-12-09 23:04:05.550643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:30.379 [2024-12-09 23:04:05.550870] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:30.379 BaseBdev1 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.379 BaseBdev2_malloc 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.379 true 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.379 [2024-12-09 23:04:05.601840] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:30.379 [2024-12-09 23:04:05.602076] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:30.379 [2024-12-09 23:04:05.602129] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:22:30.379 [2024-12-09 23:04:05.602143] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:30.379 [2024-12-09 23:04:05.604639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:30.379 [2024-12-09 23:04:05.604709] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:30.379 BaseBdev2 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.379 BaseBdev3_malloc 00:22:30.379 23:04:05 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.379 true 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.379 [2024-12-09 23:04:05.661403] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:30.379 [2024-12-09 23:04:05.661477] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:30.379 [2024-12-09 23:04:05.661500] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:22:30.379 [2024-12-09 23:04:05.661511] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:30.379 [2024-12-09 23:04:05.664020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:30.379 [2024-12-09 23:04:05.664241] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:30.379 BaseBdev3 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:30.379 23:04:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:22:30.380 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.380 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.380 BaseBdev4_malloc 00:22:30.380 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.380 23:04:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:22:30.380 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.380 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.380 true 00:22:30.380 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.380 23:04:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:22:30.380 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.380 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.380 [2024-12-09 23:04:05.710880] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:22:30.380 [2024-12-09 23:04:05.710949] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:30.380 [2024-12-09 23:04:05.710969] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:30.380 [2024-12-09 23:04:05.710980] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:30.380 [2024-12-09 23:04:05.713487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:30.380 [2024-12-09 23:04:05.713541] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:30.380 BaseBdev4 00:22:30.380 23:04:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.380 23:04:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:22:30.380 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.380 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.380 [2024-12-09 23:04:05.718961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:30.380 [2024-12-09 23:04:05.721329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:30.380 [2024-12-09 23:04:05.721423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:30.380 [2024-12-09 23:04:05.721496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:30.380 [2024-12-09 23:04:05.721747] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:22:30.380 [2024-12-09 23:04:05.721763] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:30.380 [2024-12-09 23:04:05.722050] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:22:30.380 [2024-12-09 23:04:05.722277] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:22:30.380 [2024-12-09 23:04:05.722292] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:22:30.380 [2024-12-09 23:04:05.722459] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:30.380 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.380 23:04:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:22:30.380 23:04:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:30.380 23:04:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:30.380 23:04:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:30.380 23:04:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:30.380 23:04:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:30.380 23:04:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:30.380 23:04:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:30.380 23:04:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:30.380 23:04:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:30.380 23:04:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.380 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.380 23:04:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.380 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.640 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.640 23:04:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:30.640 "name": "raid_bdev1", 00:22:30.640 "uuid": "86b5308d-ae57-4973-b1a5-2d85619e4b78", 00:22:30.640 "strip_size_kb": 64, 00:22:30.640 "state": "online", 00:22:30.640 "raid_level": "concat", 00:22:30.640 "superblock": true, 00:22:30.640 "num_base_bdevs": 4, 00:22:30.640 "num_base_bdevs_discovered": 4, 00:22:30.640 "num_base_bdevs_operational": 4, 00:22:30.640 "base_bdevs_list": [ 
00:22:30.640 { 00:22:30.640 "name": "BaseBdev1", 00:22:30.640 "uuid": "07f87b4a-5be9-5084-9316-6f41d6869f7a", 00:22:30.640 "is_configured": true, 00:22:30.640 "data_offset": 2048, 00:22:30.640 "data_size": 63488 00:22:30.640 }, 00:22:30.640 { 00:22:30.640 "name": "BaseBdev2", 00:22:30.640 "uuid": "61799e94-9101-5c46-b1cb-9762e1fce0de", 00:22:30.640 "is_configured": true, 00:22:30.640 "data_offset": 2048, 00:22:30.640 "data_size": 63488 00:22:30.640 }, 00:22:30.640 { 00:22:30.640 "name": "BaseBdev3", 00:22:30.640 "uuid": "fa5eb6b2-29a5-510b-b942-b95ad0744a1f", 00:22:30.640 "is_configured": true, 00:22:30.640 "data_offset": 2048, 00:22:30.640 "data_size": 63488 00:22:30.640 }, 00:22:30.640 { 00:22:30.640 "name": "BaseBdev4", 00:22:30.640 "uuid": "2e45fd1c-cb06-531e-9514-3bf94aef2706", 00:22:30.640 "is_configured": true, 00:22:30.640 "data_offset": 2048, 00:22:30.640 "data_size": 63488 00:22:30.640 } 00:22:30.640 ] 00:22:30.640 }' 00:22:30.640 23:04:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:30.640 23:04:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.909 23:04:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:22:30.909 23:04:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:22:30.909 [2024-12-09 23:04:06.168183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:22:31.849 23:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:22:31.849 23:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.849 23:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.849 23:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.849 23:04:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:22:31.849 23:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:22:31.849 23:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:22:31.849 23:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:22:31.849 23:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:31.849 23:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:31.849 23:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:31.849 23:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:31.849 23:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:31.849 23:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:31.849 23:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:31.849 23:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:31.849 23:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:31.849 23:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.849 23:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.849 23:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.849 23:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.849 23:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.849 23:04:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:31.849 "name": "raid_bdev1", 00:22:31.849 "uuid": "86b5308d-ae57-4973-b1a5-2d85619e4b78", 00:22:31.849 "strip_size_kb": 64, 00:22:31.849 "state": "online", 00:22:31.849 "raid_level": "concat", 00:22:31.849 "superblock": true, 00:22:31.849 "num_base_bdevs": 4, 00:22:31.849 "num_base_bdevs_discovered": 4, 00:22:31.849 "num_base_bdevs_operational": 4, 00:22:31.849 "base_bdevs_list": [ 00:22:31.849 { 00:22:31.849 "name": "BaseBdev1", 00:22:31.849 "uuid": "07f87b4a-5be9-5084-9316-6f41d6869f7a", 00:22:31.849 "is_configured": true, 00:22:31.849 "data_offset": 2048, 00:22:31.849 "data_size": 63488 00:22:31.849 }, 00:22:31.849 { 00:22:31.849 "name": "BaseBdev2", 00:22:31.849 "uuid": "61799e94-9101-5c46-b1cb-9762e1fce0de", 00:22:31.849 "is_configured": true, 00:22:31.849 "data_offset": 2048, 00:22:31.849 "data_size": 63488 00:22:31.849 }, 00:22:31.849 { 00:22:31.849 "name": "BaseBdev3", 00:22:31.849 "uuid": "fa5eb6b2-29a5-510b-b942-b95ad0744a1f", 00:22:31.849 "is_configured": true, 00:22:31.849 "data_offset": 2048, 00:22:31.849 "data_size": 63488 00:22:31.849 }, 00:22:31.849 { 00:22:31.849 "name": "BaseBdev4", 00:22:31.849 "uuid": "2e45fd1c-cb06-531e-9514-3bf94aef2706", 00:22:31.849 "is_configured": true, 00:22:31.849 "data_offset": 2048, 00:22:31.849 "data_size": 63488 00:22:31.849 } 00:22:31.849 ] 00:22:31.849 }' 00:22:31.849 23:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:31.849 23:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.111 23:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:32.111 23:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.111 23:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.111 [2024-12-09 23:04:07.464755] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:32.111 [2024-12-09 23:04:07.465008] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:32.111 [2024-12-09 23:04:07.468463] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:32.111 [2024-12-09 23:04:07.468566] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:32.111 [2024-12-09 23:04:07.468764] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:32.111 [2024-12-09 23:04:07.468915] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:22:32.111 23:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.111 { 00:22:32.111 "results": [ 00:22:32.111 { 00:22:32.111 "job": "raid_bdev1", 00:22:32.111 "core_mask": "0x1", 00:22:32.111 "workload": "randrw", 00:22:32.111 "percentage": 50, 00:22:32.111 "status": "finished", 00:22:32.111 "queue_depth": 1, 00:22:32.111 "io_size": 131072, 00:22:32.111 "runtime": 1.294732, 00:22:32.111 "iops": 12010.207517849254, 00:22:32.111 "mibps": 1501.2759397311568, 00:22:32.111 "io_failed": 1, 00:22:32.111 "io_timeout": 0, 00:22:32.111 "avg_latency_us": 115.41028892527315, 00:22:32.111 "min_latency_us": 34.46153846153846, 00:22:32.111 "max_latency_us": 1714.0184615384615 00:22:32.111 } 00:22:32.111 ], 00:22:32.111 "core_count": 1 00:22:32.111 } 00:22:32.111 23:04:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71026 00:22:32.111 23:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71026 ']' 00:22:32.111 23:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71026 00:22:32.372 23:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:22:32.372 23:04:07 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:32.372 23:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71026 00:22:32.372 23:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:32.372 23:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:32.372 23:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71026' 00:22:32.372 killing process with pid 71026 00:22:32.372 23:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71026 00:22:32.372 23:04:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71026 00:22:32.372 [2024-12-09 23:04:07.502359] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:32.633 [2024-12-09 23:04:07.737941] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:33.580 23:04:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.UX6BCA5KLa 00:22:33.580 23:04:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:22:33.580 23:04:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:22:33.580 ************************************ 00:22:33.580 END TEST raid_read_error_test 00:22:33.580 ************************************ 00:22:33.580 23:04:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.77 00:22:33.580 23:04:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:22:33.580 23:04:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:33.580 23:04:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:22:33.580 23:04:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.77 != \0\.\0\0 ]] 00:22:33.580 00:22:33.580 real 0m4.096s 
00:22:33.580 user 0m4.761s 00:22:33.580 sys 0m0.540s 00:22:33.580 23:04:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:33.580 23:04:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.580 23:04:08 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:22:33.580 23:04:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:33.580 23:04:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:33.580 23:04:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:33.580 ************************************ 00:22:33.580 START TEST raid_write_error_test 00:22:33.580 ************************************ 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wKkju8jxsU 00:22:33.580 23:04:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71166 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71166 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71166 ']' 00:22:33.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:33.580 23:04:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.580 [2024-12-09 23:04:08.788310] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:22:33.580 [2024-12-09 23:04:08.788469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71166 ] 00:22:33.842 [2024-12-09 23:04:08.952609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.842 [2024-12-09 23:04:09.092857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.103 [2024-12-09 23:04:09.259980] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:34.103 [2024-12-09 23:04:09.260073] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:34.364 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:34.364 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:22:34.364 23:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:34.364 23:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:34.364 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.364 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.364 BaseBdev1_malloc 00:22:34.364 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.364 23:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:22:34.364 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.364 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.364 true 00:22:34.364 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:22:34.364 23:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:34.364 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.364 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.364 [2024-12-09 23:04:09.716474] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:34.364 [2024-12-09 23:04:09.716579] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:34.364 [2024-12-09 23:04:09.716611] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:22:34.364 [2024-12-09 23:04:09.716627] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:34.364 [2024-12-09 23:04:09.719850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:34.364 [2024-12-09 23:04:09.719921] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:34.364 BaseBdev1 00:22:34.364 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.364 23:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:34.364 23:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:34.365 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.365 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.628 BaseBdev2_malloc 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:22:34.628 23:04:09 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.628 true 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.628 [2024-12-09 23:04:09.774799] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:34.628 [2024-12-09 23:04:09.774884] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:34.628 [2024-12-09 23:04:09.774906] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:22:34.628 [2024-12-09 23:04:09.774918] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:34.628 [2024-12-09 23:04:09.777598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:34.628 [2024-12-09 23:04:09.777666] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:34.628 BaseBdev2 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:22:34.628 BaseBdev3_malloc 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.628 true 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.628 [2024-12-09 23:04:09.838894] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:34.628 [2024-12-09 23:04:09.838978] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:34.628 [2024-12-09 23:04:09.839004] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:22:34.628 [2024-12-09 23:04:09.839017] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:34.628 [2024-12-09 23:04:09.841722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:34.628 [2024-12-09 23:04:09.841785] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:34.628 BaseBdev3 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.628 BaseBdev4_malloc 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.628 true 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.628 [2024-12-09 23:04:09.893310] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:22:34.628 [2024-12-09 23:04:09.893393] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:34.628 [2024-12-09 23:04:09.893418] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:34.628 [2024-12-09 23:04:09.893430] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:34.628 [2024-12-09 23:04:09.896085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:34.628 [2024-12-09 23:04:09.896164] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:34.628 BaseBdev4 
00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.628 [2024-12-09 23:04:09.905424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:34.628 [2024-12-09 23:04:09.907684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:34.628 [2024-12-09 23:04:09.907800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:34.628 [2024-12-09 23:04:09.907882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:34.628 [2024-12-09 23:04:09.908166] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:22:34.628 [2024-12-09 23:04:09.908185] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:34.628 [2024-12-09 23:04:09.908513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:22:34.628 [2024-12-09 23:04:09.908714] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:22:34.628 [2024-12-09 23:04:09.908727] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:22:34.628 [2024-12-09 23:04:09.908923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:34.628 23:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:34.629 23:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:34.629 23:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:34.629 23:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:34.629 23:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:34.629 23:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:34.629 23:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:34.629 23:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.629 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.629 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.629 23:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.629 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.629 23:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:34.629 "name": "raid_bdev1", 00:22:34.629 "uuid": "9adad9bf-4dcc-42ee-a2d4-6fc12a51fe57", 00:22:34.629 "strip_size_kb": 64, 00:22:34.629 "state": "online", 00:22:34.629 "raid_level": "concat", 00:22:34.629 "superblock": true, 00:22:34.629 "num_base_bdevs": 4, 00:22:34.629 "num_base_bdevs_discovered": 4, 00:22:34.629 
"num_base_bdevs_operational": 4, 00:22:34.629 "base_bdevs_list": [ 00:22:34.629 { 00:22:34.629 "name": "BaseBdev1", 00:22:34.629 "uuid": "fc813591-7c70-54e5-9548-fac58e8d23be", 00:22:34.629 "is_configured": true, 00:22:34.629 "data_offset": 2048, 00:22:34.629 "data_size": 63488 00:22:34.629 }, 00:22:34.629 { 00:22:34.629 "name": "BaseBdev2", 00:22:34.629 "uuid": "9eebc95d-b721-517f-9ad5-f39b54d1d4bd", 00:22:34.629 "is_configured": true, 00:22:34.629 "data_offset": 2048, 00:22:34.629 "data_size": 63488 00:22:34.629 }, 00:22:34.629 { 00:22:34.629 "name": "BaseBdev3", 00:22:34.629 "uuid": "501dde33-a71d-573d-b5e1-ead48ec03f8c", 00:22:34.629 "is_configured": true, 00:22:34.629 "data_offset": 2048, 00:22:34.629 "data_size": 63488 00:22:34.629 }, 00:22:34.629 { 00:22:34.629 "name": "BaseBdev4", 00:22:34.629 "uuid": "f771739e-abd3-5220-9041-cd29f1332506", 00:22:34.629 "is_configured": true, 00:22:34.629 "data_offset": 2048, 00:22:34.629 "data_size": 63488 00:22:34.629 } 00:22:34.629 ] 00:22:34.629 }' 00:22:34.629 23:04:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:34.629 23:04:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.203 23:04:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:22:35.203 23:04:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:22:35.203 [2024-12-09 23:04:10.382672] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:22:36.160 23:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:22:36.160 23:04:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.160 23:04:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.160 23:04:11 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.160 23:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:22:36.160 23:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:22:36.160 23:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:22:36.160 23:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:22:36.160 23:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:36.160 23:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:36.160 23:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:36.160 23:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:36.160 23:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:36.160 23:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:36.160 23:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:36.160 23:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:36.160 23:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:36.160 23:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:36.160 23:04:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.160 23:04:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.160 23:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:36.160 23:04:11 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.160 23:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:36.160 "name": "raid_bdev1", 00:22:36.160 "uuid": "9adad9bf-4dcc-42ee-a2d4-6fc12a51fe57", 00:22:36.160 "strip_size_kb": 64, 00:22:36.160 "state": "online", 00:22:36.160 "raid_level": "concat", 00:22:36.160 "superblock": true, 00:22:36.160 "num_base_bdevs": 4, 00:22:36.160 "num_base_bdevs_discovered": 4, 00:22:36.160 "num_base_bdevs_operational": 4, 00:22:36.160 "base_bdevs_list": [ 00:22:36.160 { 00:22:36.160 "name": "BaseBdev1", 00:22:36.160 "uuid": "fc813591-7c70-54e5-9548-fac58e8d23be", 00:22:36.160 "is_configured": true, 00:22:36.160 "data_offset": 2048, 00:22:36.160 "data_size": 63488 00:22:36.160 }, 00:22:36.160 { 00:22:36.160 "name": "BaseBdev2", 00:22:36.160 "uuid": "9eebc95d-b721-517f-9ad5-f39b54d1d4bd", 00:22:36.160 "is_configured": true, 00:22:36.160 "data_offset": 2048, 00:22:36.160 "data_size": 63488 00:22:36.160 }, 00:22:36.160 { 00:22:36.160 "name": "BaseBdev3", 00:22:36.160 "uuid": "501dde33-a71d-573d-b5e1-ead48ec03f8c", 00:22:36.160 "is_configured": true, 00:22:36.160 "data_offset": 2048, 00:22:36.160 "data_size": 63488 00:22:36.160 }, 00:22:36.160 { 00:22:36.160 "name": "BaseBdev4", 00:22:36.160 "uuid": "f771739e-abd3-5220-9041-cd29f1332506", 00:22:36.160 "is_configured": true, 00:22:36.160 "data_offset": 2048, 00:22:36.160 "data_size": 63488 00:22:36.160 } 00:22:36.160 ] 00:22:36.160 }' 00:22:36.160 23:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:36.160 23:04:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.434 23:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:36.434 23:04:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.434 23:04:11 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:36.434 [2024-12-09 23:04:11.630189] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:36.434 [2024-12-09 23:04:11.630234] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:36.434 [2024-12-09 23:04:11.633484] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:36.434 { 00:22:36.434 "results": [ 00:22:36.434 { 00:22:36.434 "job": "raid_bdev1", 00:22:36.434 "core_mask": "0x1", 00:22:36.434 "workload": "randrw", 00:22:36.434 "percentage": 50, 00:22:36.434 "status": "finished", 00:22:36.434 "queue_depth": 1, 00:22:36.434 "io_size": 131072, 00:22:36.434 "runtime": 1.245273, 00:22:36.434 "iops": 11909.838244304663, 00:22:36.434 "mibps": 1488.7297805380829, 00:22:36.434 "io_failed": 1, 00:22:36.434 "io_timeout": 0, 00:22:36.434 "avg_latency_us": 116.44742510994939, 00:22:36.434 "min_latency_us": 34.26461538461538, 00:22:36.434 "max_latency_us": 1714.0184615384615 00:22:36.434 } 00:22:36.434 ], 00:22:36.434 "core_count": 1 00:22:36.434 } 00:22:36.434 [2024-12-09 23:04:11.633752] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:36.434 [2024-12-09 23:04:11.633821] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:36.434 [2024-12-09 23:04:11.633834] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:22:36.434 23:04:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.434 23:04:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71166 00:22:36.434 23:04:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71166 ']' 00:22:36.434 23:04:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71166 00:22:36.434 23:04:11 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:22:36.434 23:04:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:36.434 23:04:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71166 00:22:36.434 killing process with pid 71166 00:22:36.434 23:04:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:36.434 23:04:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:36.434 23:04:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71166' 00:22:36.434 23:04:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71166 00:22:36.434 23:04:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71166 00:22:36.434 [2024-12-09 23:04:11.664240] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:36.695 [2024-12-09 23:04:11.899314] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:37.641 23:04:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wKkju8jxsU 00:22:37.641 23:04:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:22:37.641 23:04:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:22:37.641 23:04:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.80 00:22:37.641 23:04:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:22:37.641 23:04:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:37.641 23:04:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:22:37.641 23:04:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.80 != \0\.\0\0 ]] 00:22:37.641 00:22:37.641 real 0m4.087s 00:22:37.641 user 0m4.737s 
00:22:37.641 sys 0m0.550s 00:22:37.641 23:04:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:37.641 ************************************ 00:22:37.641 END TEST raid_write_error_test 00:22:37.641 ************************************ 00:22:37.641 23:04:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.641 23:04:12 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:22:37.641 23:04:12 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:22:37.641 23:04:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:37.641 23:04:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:37.641 23:04:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:37.641 ************************************ 00:22:37.641 START TEST raid_state_function_test 00:22:37.641 ************************************ 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:37.641 
23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:22:37.641 23:04:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:22:37.641 Process raid pid: 71304 00:22:37.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71304 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71304' 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71304 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71304 ']' 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:37.641 23:04:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.641 [2024-12-09 23:04:12.976415] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:22:37.641 [2024-12-09 23:04:12.976689] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:37.909 [2024-12-09 23:04:13.162678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.171 [2024-12-09 23:04:13.309948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.171 [2024-12-09 23:04:13.483237] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:38.171 [2024-12-09 23:04:13.483298] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:38.741 23:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:38.741 23:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:22:38.741 23:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:38.741 23:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.741 23:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.741 [2024-12-09 23:04:13.847148] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:38.741 [2024-12-09 23:04:13.847237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:38.741 [2024-12-09 23:04:13.847257] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:38.741 [2024-12-09 23:04:13.847268] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:38.741 [2024-12-09 23:04:13.847276] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:22:38.741 [2024-12-09 23:04:13.847285] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:38.741 [2024-12-09 23:04:13.847292] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:38.741 [2024-12-09 23:04:13.847301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:38.741 23:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.741 23:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:38.741 23:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:38.741 23:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:38.741 23:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:38.741 23:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:38.741 23:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:38.741 23:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:38.741 23:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:38.741 23:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:38.741 23:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:38.741 23:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:38.741 23:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:38.741 23:04:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.741 23:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.741 23:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.741 23:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:38.741 "name": "Existed_Raid", 00:22:38.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.741 "strip_size_kb": 0, 00:22:38.741 "state": "configuring", 00:22:38.741 "raid_level": "raid1", 00:22:38.741 "superblock": false, 00:22:38.741 "num_base_bdevs": 4, 00:22:38.741 "num_base_bdevs_discovered": 0, 00:22:38.741 "num_base_bdevs_operational": 4, 00:22:38.741 "base_bdevs_list": [ 00:22:38.741 { 00:22:38.741 "name": "BaseBdev1", 00:22:38.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.741 "is_configured": false, 00:22:38.741 "data_offset": 0, 00:22:38.741 "data_size": 0 00:22:38.741 }, 00:22:38.741 { 00:22:38.741 "name": "BaseBdev2", 00:22:38.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.741 "is_configured": false, 00:22:38.741 "data_offset": 0, 00:22:38.741 "data_size": 0 00:22:38.741 }, 00:22:38.741 { 00:22:38.741 "name": "BaseBdev3", 00:22:38.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.741 "is_configured": false, 00:22:38.741 "data_offset": 0, 00:22:38.741 "data_size": 0 00:22:38.741 }, 00:22:38.741 { 00:22:38.741 "name": "BaseBdev4", 00:22:38.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.741 "is_configured": false, 00:22:38.741 "data_offset": 0, 00:22:38.741 "data_size": 0 00:22:38.741 } 00:22:38.741 ] 00:22:38.741 }' 00:22:38.741 23:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:38.741 23:04:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.001 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:22:39.001 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.001 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.001 [2024-12-09 23:04:14.215199] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:39.001 [2024-12-09 23:04:14.215255] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:39.001 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.001 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:39.001 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.001 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.001 [2024-12-09 23:04:14.227216] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:39.001 [2024-12-09 23:04:14.227473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:39.001 [2024-12-09 23:04:14.227564] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:39.001 [2024-12-09 23:04:14.227601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:39.001 [2024-12-09 23:04:14.227621] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:39.001 [2024-12-09 23:04:14.227643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:39.001 [2024-12-09 23:04:14.227661] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:39.001 [2024-12-09 23:04:14.227682] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:39.001 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.001 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:39.001 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.001 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.001 [2024-12-09 23:04:14.266728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:39.001 BaseBdev1 00:22:39.001 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.001 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:39.001 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:39.001 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:39.001 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:39.001 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:39.001 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:39.001 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:39.001 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.001 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.001 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.001 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:39.001 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.001 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.001 [ 00:22:39.001 { 00:22:39.001 "name": "BaseBdev1", 00:22:39.001 "aliases": [ 00:22:39.001 "76ffe2cd-ec9b-4b84-8425-bcaa0d1ffc9b" 00:22:39.001 ], 00:22:39.001 "product_name": "Malloc disk", 00:22:39.001 "block_size": 512, 00:22:39.001 "num_blocks": 65536, 00:22:39.001 "uuid": "76ffe2cd-ec9b-4b84-8425-bcaa0d1ffc9b", 00:22:39.001 "assigned_rate_limits": { 00:22:39.001 "rw_ios_per_sec": 0, 00:22:39.001 "rw_mbytes_per_sec": 0, 00:22:39.001 "r_mbytes_per_sec": 0, 00:22:39.001 "w_mbytes_per_sec": 0 00:22:39.001 }, 00:22:39.001 "claimed": true, 00:22:39.001 "claim_type": "exclusive_write", 00:22:39.001 "zoned": false, 00:22:39.001 "supported_io_types": { 00:22:39.002 "read": true, 00:22:39.002 "write": true, 00:22:39.002 "unmap": true, 00:22:39.002 "flush": true, 00:22:39.002 "reset": true, 00:22:39.002 "nvme_admin": false, 00:22:39.002 "nvme_io": false, 00:22:39.002 "nvme_io_md": false, 00:22:39.002 "write_zeroes": true, 00:22:39.002 "zcopy": true, 00:22:39.002 "get_zone_info": false, 00:22:39.002 "zone_management": false, 00:22:39.002 "zone_append": false, 00:22:39.002 "compare": false, 00:22:39.002 "compare_and_write": false, 00:22:39.002 "abort": true, 00:22:39.002 "seek_hole": false, 00:22:39.002 "seek_data": false, 00:22:39.002 "copy": true, 00:22:39.002 "nvme_iov_md": false 00:22:39.002 }, 00:22:39.002 "memory_domains": [ 00:22:39.002 { 00:22:39.002 "dma_device_id": "system", 00:22:39.002 "dma_device_type": 1 00:22:39.002 }, 00:22:39.002 { 00:22:39.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:39.002 "dma_device_type": 2 00:22:39.002 } 00:22:39.002 ], 00:22:39.002 "driver_specific": {} 00:22:39.002 } 00:22:39.002 ] 00:22:39.002 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:22:39.002 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:39.002 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:39.002 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:39.002 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:39.002 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:39.002 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:39.002 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:39.002 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:39.002 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:39.002 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:39.002 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:39.002 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:39.002 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.002 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.002 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:39.002 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.002 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:39.002 "name": "Existed_Raid", 
00:22:39.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.002 "strip_size_kb": 0, 00:22:39.002 "state": "configuring", 00:22:39.002 "raid_level": "raid1", 00:22:39.002 "superblock": false, 00:22:39.002 "num_base_bdevs": 4, 00:22:39.002 "num_base_bdevs_discovered": 1, 00:22:39.002 "num_base_bdevs_operational": 4, 00:22:39.002 "base_bdevs_list": [ 00:22:39.002 { 00:22:39.002 "name": "BaseBdev1", 00:22:39.002 "uuid": "76ffe2cd-ec9b-4b84-8425-bcaa0d1ffc9b", 00:22:39.002 "is_configured": true, 00:22:39.002 "data_offset": 0, 00:22:39.002 "data_size": 65536 00:22:39.002 }, 00:22:39.002 { 00:22:39.002 "name": "BaseBdev2", 00:22:39.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.002 "is_configured": false, 00:22:39.002 "data_offset": 0, 00:22:39.002 "data_size": 0 00:22:39.002 }, 00:22:39.002 { 00:22:39.002 "name": "BaseBdev3", 00:22:39.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.002 "is_configured": false, 00:22:39.002 "data_offset": 0, 00:22:39.002 "data_size": 0 00:22:39.002 }, 00:22:39.002 { 00:22:39.002 "name": "BaseBdev4", 00:22:39.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.002 "is_configured": false, 00:22:39.002 "data_offset": 0, 00:22:39.002 "data_size": 0 00:22:39.002 } 00:22:39.002 ] 00:22:39.002 }' 00:22:39.002 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:39.002 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.573 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:39.573 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.573 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.573 [2024-12-09 23:04:14.670901] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:39.573 [2024-12-09 23:04:14.671197] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:39.573 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.573 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:39.573 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.573 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.573 [2024-12-09 23:04:14.678958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:39.573 [2024-12-09 23:04:14.681357] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:39.573 [2024-12-09 23:04:14.681555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:39.573 [2024-12-09 23:04:14.681655] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:39.573 [2024-12-09 23:04:14.681707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:39.573 [2024-12-09 23:04:14.681738] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:39.573 [2024-12-09 23:04:14.681773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:39.573 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.573 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:39.573 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:39.573 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:39.573 
23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:39.573 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:39.573 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:39.573 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:39.573 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:39.573 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:39.573 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:39.573 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:39.573 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:39.573 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:39.573 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:39.573 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.573 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.573 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.573 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:39.573 "name": "Existed_Raid", 00:22:39.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.573 "strip_size_kb": 0, 00:22:39.573 "state": "configuring", 00:22:39.573 "raid_level": "raid1", 00:22:39.573 "superblock": false, 00:22:39.573 "num_base_bdevs": 4, 00:22:39.573 "num_base_bdevs_discovered": 1, 
00:22:39.573 "num_base_bdevs_operational": 4, 00:22:39.573 "base_bdevs_list": [ 00:22:39.573 { 00:22:39.573 "name": "BaseBdev1", 00:22:39.573 "uuid": "76ffe2cd-ec9b-4b84-8425-bcaa0d1ffc9b", 00:22:39.573 "is_configured": true, 00:22:39.573 "data_offset": 0, 00:22:39.573 "data_size": 65536 00:22:39.573 }, 00:22:39.573 { 00:22:39.573 "name": "BaseBdev2", 00:22:39.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.573 "is_configured": false, 00:22:39.573 "data_offset": 0, 00:22:39.573 "data_size": 0 00:22:39.573 }, 00:22:39.573 { 00:22:39.573 "name": "BaseBdev3", 00:22:39.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.573 "is_configured": false, 00:22:39.573 "data_offset": 0, 00:22:39.573 "data_size": 0 00:22:39.573 }, 00:22:39.573 { 00:22:39.573 "name": "BaseBdev4", 00:22:39.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.573 "is_configured": false, 00:22:39.573 "data_offset": 0, 00:22:39.573 "data_size": 0 00:22:39.573 } 00:22:39.573 ] 00:22:39.573 }' 00:22:39.573 23:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:39.573 23:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.869 [2024-12-09 23:04:15.099033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:39.869 BaseBdev2 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.869 [ 00:22:39.869 { 00:22:39.869 "name": "BaseBdev2", 00:22:39.869 "aliases": [ 00:22:39.869 "0f899a6d-9f95-4881-a0cc-4993a2ed9c2b" 00:22:39.869 ], 00:22:39.869 "product_name": "Malloc disk", 00:22:39.869 "block_size": 512, 00:22:39.869 "num_blocks": 65536, 00:22:39.869 "uuid": "0f899a6d-9f95-4881-a0cc-4993a2ed9c2b", 00:22:39.869 "assigned_rate_limits": { 00:22:39.869 "rw_ios_per_sec": 0, 00:22:39.869 "rw_mbytes_per_sec": 0, 00:22:39.869 "r_mbytes_per_sec": 0, 00:22:39.869 "w_mbytes_per_sec": 0 00:22:39.869 }, 00:22:39.869 "claimed": true, 00:22:39.869 "claim_type": "exclusive_write", 00:22:39.869 "zoned": false, 00:22:39.869 "supported_io_types": { 00:22:39.869 "read": true, 
00:22:39.869 "write": true, 00:22:39.869 "unmap": true, 00:22:39.869 "flush": true, 00:22:39.869 "reset": true, 00:22:39.869 "nvme_admin": false, 00:22:39.869 "nvme_io": false, 00:22:39.869 "nvme_io_md": false, 00:22:39.869 "write_zeroes": true, 00:22:39.869 "zcopy": true, 00:22:39.869 "get_zone_info": false, 00:22:39.869 "zone_management": false, 00:22:39.869 "zone_append": false, 00:22:39.869 "compare": false, 00:22:39.869 "compare_and_write": false, 00:22:39.869 "abort": true, 00:22:39.869 "seek_hole": false, 00:22:39.869 "seek_data": false, 00:22:39.869 "copy": true, 00:22:39.869 "nvme_iov_md": false 00:22:39.869 }, 00:22:39.869 "memory_domains": [ 00:22:39.869 { 00:22:39.869 "dma_device_id": "system", 00:22:39.869 "dma_device_type": 1 00:22:39.869 }, 00:22:39.869 { 00:22:39.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:39.869 "dma_device_type": 2 00:22:39.869 } 00:22:39.869 ], 00:22:39.869 "driver_specific": {} 00:22:39.869 } 00:22:39.869 ] 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.869 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:39.869 "name": "Existed_Raid", 00:22:39.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.869 "strip_size_kb": 0, 00:22:39.869 "state": "configuring", 00:22:39.869 "raid_level": "raid1", 00:22:39.869 "superblock": false, 00:22:39.869 "num_base_bdevs": 4, 00:22:39.869 "num_base_bdevs_discovered": 2, 00:22:39.869 "num_base_bdevs_operational": 4, 00:22:39.869 "base_bdevs_list": [ 00:22:39.869 { 00:22:39.869 "name": "BaseBdev1", 00:22:39.869 "uuid": "76ffe2cd-ec9b-4b84-8425-bcaa0d1ffc9b", 00:22:39.869 "is_configured": true, 00:22:39.869 "data_offset": 0, 00:22:39.869 "data_size": 65536 00:22:39.869 }, 00:22:39.869 { 00:22:39.869 "name": "BaseBdev2", 00:22:39.869 "uuid": "0f899a6d-9f95-4881-a0cc-4993a2ed9c2b", 00:22:39.869 "is_configured": true, 
00:22:39.869 "data_offset": 0, 00:22:39.869 "data_size": 65536 00:22:39.869 }, 00:22:39.869 { 00:22:39.869 "name": "BaseBdev3", 00:22:39.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.869 "is_configured": false, 00:22:39.869 "data_offset": 0, 00:22:39.869 "data_size": 0 00:22:39.869 }, 00:22:39.869 { 00:22:39.869 "name": "BaseBdev4", 00:22:39.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.869 "is_configured": false, 00:22:39.869 "data_offset": 0, 00:22:39.869 "data_size": 0 00:22:39.869 } 00:22:39.870 ] 00:22:39.870 }' 00:22:39.870 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:39.870 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.129 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:40.129 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.129 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.390 [2024-12-09 23:04:15.513333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:40.390 BaseBdev3 00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.390 [ 00:22:40.390 { 00:22:40.390 "name": "BaseBdev3", 00:22:40.390 "aliases": [ 00:22:40.390 "19c91431-470d-4926-b18e-4402550a5386" 00:22:40.390 ], 00:22:40.390 "product_name": "Malloc disk", 00:22:40.390 "block_size": 512, 00:22:40.390 "num_blocks": 65536, 00:22:40.390 "uuid": "19c91431-470d-4926-b18e-4402550a5386", 00:22:40.390 "assigned_rate_limits": { 00:22:40.390 "rw_ios_per_sec": 0, 00:22:40.390 "rw_mbytes_per_sec": 0, 00:22:40.390 "r_mbytes_per_sec": 0, 00:22:40.390 "w_mbytes_per_sec": 0 00:22:40.390 }, 00:22:40.390 "claimed": true, 00:22:40.390 "claim_type": "exclusive_write", 00:22:40.390 "zoned": false, 00:22:40.390 "supported_io_types": { 00:22:40.390 "read": true, 00:22:40.390 "write": true, 00:22:40.390 "unmap": true, 00:22:40.390 "flush": true, 00:22:40.390 "reset": true, 00:22:40.390 "nvme_admin": false, 00:22:40.390 "nvme_io": false, 00:22:40.390 "nvme_io_md": false, 00:22:40.390 "write_zeroes": true, 00:22:40.390 "zcopy": true, 00:22:40.390 "get_zone_info": false, 00:22:40.390 "zone_management": false, 00:22:40.390 "zone_append": false, 00:22:40.390 "compare": false, 00:22:40.390 "compare_and_write": false, 
00:22:40.390 "abort": true, 00:22:40.390 "seek_hole": false, 00:22:40.390 "seek_data": false, 00:22:40.390 "copy": true, 00:22:40.390 "nvme_iov_md": false 00:22:40.390 }, 00:22:40.390 "memory_domains": [ 00:22:40.390 { 00:22:40.390 "dma_device_id": "system", 00:22:40.390 "dma_device_type": 1 00:22:40.390 }, 00:22:40.390 { 00:22:40.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:40.390 "dma_device_type": 2 00:22:40.390 } 00:22:40.390 ], 00:22:40.390 "driver_specific": {} 00:22:40.390 } 00:22:40.390 ] 00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:40.390 "name": "Existed_Raid", 00:22:40.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.390 "strip_size_kb": 0, 00:22:40.390 "state": "configuring", 00:22:40.390 "raid_level": "raid1", 00:22:40.390 "superblock": false, 00:22:40.390 "num_base_bdevs": 4, 00:22:40.390 "num_base_bdevs_discovered": 3, 00:22:40.390 "num_base_bdevs_operational": 4, 00:22:40.390 "base_bdevs_list": [ 00:22:40.390 { 00:22:40.390 "name": "BaseBdev1", 00:22:40.390 "uuid": "76ffe2cd-ec9b-4b84-8425-bcaa0d1ffc9b", 00:22:40.390 "is_configured": true, 00:22:40.390 "data_offset": 0, 00:22:40.390 "data_size": 65536 00:22:40.390 }, 00:22:40.390 { 00:22:40.390 "name": "BaseBdev2", 00:22:40.390 "uuid": "0f899a6d-9f95-4881-a0cc-4993a2ed9c2b", 00:22:40.390 "is_configured": true, 00:22:40.390 "data_offset": 0, 00:22:40.390 "data_size": 65536 00:22:40.390 }, 00:22:40.390 { 00:22:40.390 "name": "BaseBdev3", 00:22:40.390 "uuid": "19c91431-470d-4926-b18e-4402550a5386", 00:22:40.390 "is_configured": true, 00:22:40.390 "data_offset": 0, 00:22:40.390 "data_size": 65536 00:22:40.390 }, 00:22:40.390 { 00:22:40.390 "name": "BaseBdev4", 00:22:40.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.390 "is_configured": false, 
00:22:40.390 "data_offset": 0, 00:22:40.390 "data_size": 0 00:22:40.390 } 00:22:40.390 ] 00:22:40.390 }' 00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:40.390 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.652 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:22:40.652 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.652 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.652 [2024-12-09 23:04:15.911092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:40.652 [2024-12-09 23:04:15.911206] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:40.652 [2024-12-09 23:04:15.911216] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:40.652 [2024-12-09 23:04:15.911537] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:40.652 [2024-12-09 23:04:15.911725] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:40.652 [2024-12-09 23:04:15.911739] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:40.652 [2024-12-09 23:04:15.912058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:40.652 BaseBdev4 00:22:40.652 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.652 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:22:40.652 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:22:40.652 23:04:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:40.652 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:40.652 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:40.652 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:40.652 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:40.652 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.652 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.652 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.652 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:40.652 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.652 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.652 [ 00:22:40.652 { 00:22:40.652 "name": "BaseBdev4", 00:22:40.652 "aliases": [ 00:22:40.652 "c116ca16-6e4d-4ab0-97d1-ea1060874d46" 00:22:40.652 ], 00:22:40.652 "product_name": "Malloc disk", 00:22:40.652 "block_size": 512, 00:22:40.652 "num_blocks": 65536, 00:22:40.652 "uuid": "c116ca16-6e4d-4ab0-97d1-ea1060874d46", 00:22:40.652 "assigned_rate_limits": { 00:22:40.652 "rw_ios_per_sec": 0, 00:22:40.652 "rw_mbytes_per_sec": 0, 00:22:40.652 "r_mbytes_per_sec": 0, 00:22:40.652 "w_mbytes_per_sec": 0 00:22:40.652 }, 00:22:40.652 "claimed": true, 00:22:40.652 "claim_type": "exclusive_write", 00:22:40.652 "zoned": false, 00:22:40.652 "supported_io_types": { 00:22:40.652 "read": true, 00:22:40.652 "write": true, 00:22:40.652 "unmap": true, 00:22:40.652 "flush": true, 00:22:40.652 "reset": true, 00:22:40.652 
"nvme_admin": false, 00:22:40.652 "nvme_io": false, 00:22:40.652 "nvme_io_md": false, 00:22:40.652 "write_zeroes": true, 00:22:40.652 "zcopy": true, 00:22:40.652 "get_zone_info": false, 00:22:40.652 "zone_management": false, 00:22:40.652 "zone_append": false, 00:22:40.652 "compare": false, 00:22:40.653 "compare_and_write": false, 00:22:40.653 "abort": true, 00:22:40.653 "seek_hole": false, 00:22:40.653 "seek_data": false, 00:22:40.653 "copy": true, 00:22:40.653 "nvme_iov_md": false 00:22:40.653 }, 00:22:40.653 "memory_domains": [ 00:22:40.653 { 00:22:40.653 "dma_device_id": "system", 00:22:40.653 "dma_device_type": 1 00:22:40.653 }, 00:22:40.653 { 00:22:40.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:40.653 "dma_device_type": 2 00:22:40.653 } 00:22:40.653 ], 00:22:40.653 "driver_specific": {} 00:22:40.653 } 00:22:40.653 ] 00:22:40.653 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.653 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:40.653 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:40.653 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:40.653 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:22:40.653 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:40.653 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:40.653 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:40.653 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:40.653 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:40.653 23:04:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:40.653 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:40.653 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:40.653 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:40.653 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:40.653 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.653 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.653 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:40.653 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.653 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:40.653 "name": "Existed_Raid", 00:22:40.653 "uuid": "0b8af6ac-f3fc-4a77-97eb-0d748f57e908", 00:22:40.653 "strip_size_kb": 0, 00:22:40.653 "state": "online", 00:22:40.653 "raid_level": "raid1", 00:22:40.653 "superblock": false, 00:22:40.653 "num_base_bdevs": 4, 00:22:40.653 "num_base_bdevs_discovered": 4, 00:22:40.653 "num_base_bdevs_operational": 4, 00:22:40.653 "base_bdevs_list": [ 00:22:40.653 { 00:22:40.653 "name": "BaseBdev1", 00:22:40.653 "uuid": "76ffe2cd-ec9b-4b84-8425-bcaa0d1ffc9b", 00:22:40.653 "is_configured": true, 00:22:40.653 "data_offset": 0, 00:22:40.653 "data_size": 65536 00:22:40.653 }, 00:22:40.653 { 00:22:40.653 "name": "BaseBdev2", 00:22:40.653 "uuid": "0f899a6d-9f95-4881-a0cc-4993a2ed9c2b", 00:22:40.653 "is_configured": true, 00:22:40.653 "data_offset": 0, 00:22:40.653 "data_size": 65536 00:22:40.653 }, 00:22:40.653 { 00:22:40.653 "name": "BaseBdev3", 00:22:40.653 "uuid": 
"19c91431-470d-4926-b18e-4402550a5386", 00:22:40.653 "is_configured": true, 00:22:40.653 "data_offset": 0, 00:22:40.653 "data_size": 65536 00:22:40.653 }, 00:22:40.653 { 00:22:40.653 "name": "BaseBdev4", 00:22:40.653 "uuid": "c116ca16-6e4d-4ab0-97d1-ea1060874d46", 00:22:40.653 "is_configured": true, 00:22:40.653 "data_offset": 0, 00:22:40.653 "data_size": 65536 00:22:40.653 } 00:22:40.653 ] 00:22:40.653 }' 00:22:40.653 23:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:40.653 23:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.230 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:41.230 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:41.230 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:41.230 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:41.230 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:41.230 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:41.230 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:41.230 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:41.230 23:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.230 23:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.230 [2024-12-09 23:04:16.399712] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:41.230 23:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.230 23:04:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:41.230 "name": "Existed_Raid", 00:22:41.230 "aliases": [ 00:22:41.230 "0b8af6ac-f3fc-4a77-97eb-0d748f57e908" 00:22:41.230 ], 00:22:41.230 "product_name": "Raid Volume", 00:22:41.230 "block_size": 512, 00:22:41.230 "num_blocks": 65536, 00:22:41.230 "uuid": "0b8af6ac-f3fc-4a77-97eb-0d748f57e908", 00:22:41.230 "assigned_rate_limits": { 00:22:41.230 "rw_ios_per_sec": 0, 00:22:41.230 "rw_mbytes_per_sec": 0, 00:22:41.230 "r_mbytes_per_sec": 0, 00:22:41.230 "w_mbytes_per_sec": 0 00:22:41.230 }, 00:22:41.230 "claimed": false, 00:22:41.230 "zoned": false, 00:22:41.230 "supported_io_types": { 00:22:41.230 "read": true, 00:22:41.230 "write": true, 00:22:41.230 "unmap": false, 00:22:41.230 "flush": false, 00:22:41.230 "reset": true, 00:22:41.230 "nvme_admin": false, 00:22:41.230 "nvme_io": false, 00:22:41.230 "nvme_io_md": false, 00:22:41.230 "write_zeroes": true, 00:22:41.230 "zcopy": false, 00:22:41.230 "get_zone_info": false, 00:22:41.230 "zone_management": false, 00:22:41.230 "zone_append": false, 00:22:41.230 "compare": false, 00:22:41.230 "compare_and_write": false, 00:22:41.230 "abort": false, 00:22:41.230 "seek_hole": false, 00:22:41.230 "seek_data": false, 00:22:41.230 "copy": false, 00:22:41.230 "nvme_iov_md": false 00:22:41.230 }, 00:22:41.230 "memory_domains": [ 00:22:41.230 { 00:22:41.230 "dma_device_id": "system", 00:22:41.230 "dma_device_type": 1 00:22:41.230 }, 00:22:41.230 { 00:22:41.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:41.230 "dma_device_type": 2 00:22:41.230 }, 00:22:41.230 { 00:22:41.230 "dma_device_id": "system", 00:22:41.230 "dma_device_type": 1 00:22:41.230 }, 00:22:41.230 { 00:22:41.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:41.230 "dma_device_type": 2 00:22:41.230 }, 00:22:41.230 { 00:22:41.230 "dma_device_id": "system", 00:22:41.230 "dma_device_type": 1 00:22:41.230 }, 00:22:41.230 { 00:22:41.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:22:41.230 "dma_device_type": 2 00:22:41.230 }, 00:22:41.230 { 00:22:41.230 "dma_device_id": "system", 00:22:41.230 "dma_device_type": 1 00:22:41.230 }, 00:22:41.230 { 00:22:41.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:41.230 "dma_device_type": 2 00:22:41.231 } 00:22:41.231 ], 00:22:41.231 "driver_specific": { 00:22:41.231 "raid": { 00:22:41.231 "uuid": "0b8af6ac-f3fc-4a77-97eb-0d748f57e908", 00:22:41.231 "strip_size_kb": 0, 00:22:41.231 "state": "online", 00:22:41.231 "raid_level": "raid1", 00:22:41.231 "superblock": false, 00:22:41.231 "num_base_bdevs": 4, 00:22:41.231 "num_base_bdevs_discovered": 4, 00:22:41.231 "num_base_bdevs_operational": 4, 00:22:41.231 "base_bdevs_list": [ 00:22:41.231 { 00:22:41.231 "name": "BaseBdev1", 00:22:41.231 "uuid": "76ffe2cd-ec9b-4b84-8425-bcaa0d1ffc9b", 00:22:41.231 "is_configured": true, 00:22:41.231 "data_offset": 0, 00:22:41.231 "data_size": 65536 00:22:41.231 }, 00:22:41.231 { 00:22:41.231 "name": "BaseBdev2", 00:22:41.231 "uuid": "0f899a6d-9f95-4881-a0cc-4993a2ed9c2b", 00:22:41.231 "is_configured": true, 00:22:41.231 "data_offset": 0, 00:22:41.231 "data_size": 65536 00:22:41.231 }, 00:22:41.231 { 00:22:41.231 "name": "BaseBdev3", 00:22:41.231 "uuid": "19c91431-470d-4926-b18e-4402550a5386", 00:22:41.231 "is_configured": true, 00:22:41.231 "data_offset": 0, 00:22:41.231 "data_size": 65536 00:22:41.231 }, 00:22:41.231 { 00:22:41.231 "name": "BaseBdev4", 00:22:41.231 "uuid": "c116ca16-6e4d-4ab0-97d1-ea1060874d46", 00:22:41.231 "is_configured": true, 00:22:41.231 "data_offset": 0, 00:22:41.231 "data_size": 65536 00:22:41.231 } 00:22:41.231 ] 00:22:41.231 } 00:22:41.231 } 00:22:41.231 }' 00:22:41.231 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:41.231 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:41.231 BaseBdev2 00:22:41.231 BaseBdev3 
00:22:41.231 BaseBdev4' 00:22:41.231 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:41.231 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:41.231 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:41.231 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:41.231 23:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.231 23:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.231 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:41.231 23:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.231 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:41.231 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:41.231 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.494 23:04:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:41.494 23:04:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.494 [2024-12-09 23:04:16.699451] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:41.494 
23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:41.494 "name": "Existed_Raid", 00:22:41.494 "uuid": "0b8af6ac-f3fc-4a77-97eb-0d748f57e908", 00:22:41.494 "strip_size_kb": 0, 00:22:41.494 "state": "online", 00:22:41.494 "raid_level": "raid1", 00:22:41.494 "superblock": false, 00:22:41.494 "num_base_bdevs": 4, 00:22:41.494 "num_base_bdevs_discovered": 3, 00:22:41.494 "num_base_bdevs_operational": 3, 00:22:41.494 "base_bdevs_list": [ 00:22:41.494 { 00:22:41.494 "name": null, 00:22:41.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.494 "is_configured": false, 00:22:41.494 "data_offset": 0, 00:22:41.494 "data_size": 65536 00:22:41.494 }, 00:22:41.494 { 00:22:41.494 "name": "BaseBdev2", 00:22:41.494 "uuid": "0f899a6d-9f95-4881-a0cc-4993a2ed9c2b", 00:22:41.494 "is_configured": true, 00:22:41.494 "data_offset": 0, 00:22:41.494 "data_size": 65536 00:22:41.494 }, 00:22:41.494 { 00:22:41.494 "name": "BaseBdev3", 00:22:41.494 "uuid": "19c91431-470d-4926-b18e-4402550a5386", 00:22:41.494 "is_configured": true, 00:22:41.494 "data_offset": 0, 
00:22:41.494 "data_size": 65536 00:22:41.494 }, 00:22:41.494 { 00:22:41.494 "name": "BaseBdev4", 00:22:41.494 "uuid": "c116ca16-6e4d-4ab0-97d1-ea1060874d46", 00:22:41.494 "is_configured": true, 00:22:41.494 "data_offset": 0, 00:22:41.494 "data_size": 65536 00:22:41.494 } 00:22:41.494 ] 00:22:41.494 }' 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:41.494 23:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.754 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:41.754 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:41.754 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:41.754 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:41.754 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.754 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.015 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.015 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:42.015 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:42.015 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:42.015 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.015 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.015 [2024-12-09 23:04:17.145092] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:42.015 23:04:17 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.015 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:42.015 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:42.015 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:42.015 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:42.015 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.015 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.015 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.015 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:42.015 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:42.015 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:22:42.015 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.015 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.015 [2024-12-09 23:04:17.247521] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:42.015 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.015 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:42.015 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:42.015 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:42.015 23:04:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.015 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.015 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:42.015 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.015 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:42.015 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:42.015 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:22:42.015 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.015 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.015 [2024-12-09 23:04:17.356040] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:42.015 [2024-12-09 23:04:17.356316] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:42.277 [2024-12-09 23:04:17.423378] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:42.277 [2024-12-09 23:04:17.423442] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:42.277 [2024-12-09 23:04:17.423455] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.277 BaseBdev2 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.277 [ 00:22:42.277 { 00:22:42.277 "name": "BaseBdev2", 00:22:42.277 "aliases": [ 00:22:42.277 "3dc0661a-57f2-4cbc-992a-7806bceb01cd" 00:22:42.277 ], 00:22:42.277 "product_name": "Malloc disk", 00:22:42.277 "block_size": 512, 00:22:42.277 "num_blocks": 65536, 00:22:42.277 "uuid": "3dc0661a-57f2-4cbc-992a-7806bceb01cd", 00:22:42.277 "assigned_rate_limits": { 00:22:42.277 "rw_ios_per_sec": 0, 00:22:42.277 "rw_mbytes_per_sec": 0, 00:22:42.277 "r_mbytes_per_sec": 0, 00:22:42.277 "w_mbytes_per_sec": 0 00:22:42.277 }, 00:22:42.277 "claimed": false, 00:22:42.277 "zoned": false, 00:22:42.277 "supported_io_types": { 00:22:42.277 "read": true, 00:22:42.277 "write": true, 00:22:42.277 "unmap": true, 00:22:42.277 "flush": true, 00:22:42.277 "reset": true, 00:22:42.277 "nvme_admin": false, 00:22:42.277 "nvme_io": false, 00:22:42.277 "nvme_io_md": false, 00:22:42.277 "write_zeroes": true, 00:22:42.277 "zcopy": true, 00:22:42.277 "get_zone_info": false, 00:22:42.277 "zone_management": false, 00:22:42.277 "zone_append": false, 
00:22:42.277 "compare": false, 00:22:42.277 "compare_and_write": false, 00:22:42.277 "abort": true, 00:22:42.277 "seek_hole": false, 00:22:42.277 "seek_data": false, 00:22:42.277 "copy": true, 00:22:42.277 "nvme_iov_md": false 00:22:42.277 }, 00:22:42.277 "memory_domains": [ 00:22:42.277 { 00:22:42.277 "dma_device_id": "system", 00:22:42.277 "dma_device_type": 1 00:22:42.277 }, 00:22:42.277 { 00:22:42.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:42.277 "dma_device_type": 2 00:22:42.277 } 00:22:42.277 ], 00:22:42.277 "driver_specific": {} 00:22:42.277 } 00:22:42.277 ] 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.277 BaseBdev3 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.277 [ 00:22:42.277 { 00:22:42.277 "name": "BaseBdev3", 00:22:42.277 "aliases": [ 00:22:42.277 "e2abaa1d-2f6e-49b8-a3e8-66e4d4849ce6" 00:22:42.277 ], 00:22:42.277 "product_name": "Malloc disk", 00:22:42.277 "block_size": 512, 00:22:42.277 "num_blocks": 65536, 00:22:42.277 "uuid": "e2abaa1d-2f6e-49b8-a3e8-66e4d4849ce6", 00:22:42.277 "assigned_rate_limits": { 00:22:42.277 "rw_ios_per_sec": 0, 00:22:42.277 "rw_mbytes_per_sec": 0, 00:22:42.277 "r_mbytes_per_sec": 0, 00:22:42.277 "w_mbytes_per_sec": 0 00:22:42.277 }, 00:22:42.277 "claimed": false, 00:22:42.277 "zoned": false, 00:22:42.277 "supported_io_types": { 00:22:42.277 "read": true, 00:22:42.277 "write": true, 00:22:42.277 "unmap": true, 00:22:42.277 "flush": true, 00:22:42.277 "reset": true, 00:22:42.277 "nvme_admin": false, 00:22:42.277 "nvme_io": false, 00:22:42.277 "nvme_io_md": false, 00:22:42.277 "write_zeroes": true, 00:22:42.277 "zcopy": true, 00:22:42.277 "get_zone_info": false, 00:22:42.277 "zone_management": false, 00:22:42.277 "zone_append": false, 
00:22:42.277 "compare": false, 00:22:42.277 "compare_and_write": false, 00:22:42.277 "abort": true, 00:22:42.277 "seek_hole": false, 00:22:42.277 "seek_data": false, 00:22:42.277 "copy": true, 00:22:42.277 "nvme_iov_md": false 00:22:42.277 }, 00:22:42.277 "memory_domains": [ 00:22:42.277 { 00:22:42.277 "dma_device_id": "system", 00:22:42.277 "dma_device_type": 1 00:22:42.277 }, 00:22:42.277 { 00:22:42.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:42.277 "dma_device_type": 2 00:22:42.277 } 00:22:42.277 ], 00:22:42.277 "driver_specific": {} 00:22:42.277 } 00:22:42.277 ] 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.277 BaseBdev4 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:22:42.277 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:42.278 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:42.278 23:04:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:42.278 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:42.278 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:42.278 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.278 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.278 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.278 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:42.278 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.278 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.278 [ 00:22:42.278 { 00:22:42.278 "name": "BaseBdev4", 00:22:42.278 "aliases": [ 00:22:42.278 "3ae011d9-b1ca-4c05-a9f2-4da29875d543" 00:22:42.278 ], 00:22:42.278 "product_name": "Malloc disk", 00:22:42.278 "block_size": 512, 00:22:42.278 "num_blocks": 65536, 00:22:42.278 "uuid": "3ae011d9-b1ca-4c05-a9f2-4da29875d543", 00:22:42.278 "assigned_rate_limits": { 00:22:42.278 "rw_ios_per_sec": 0, 00:22:42.278 "rw_mbytes_per_sec": 0, 00:22:42.278 "r_mbytes_per_sec": 0, 00:22:42.278 "w_mbytes_per_sec": 0 00:22:42.278 }, 00:22:42.278 "claimed": false, 00:22:42.278 "zoned": false, 00:22:42.278 "supported_io_types": { 00:22:42.278 "read": true, 00:22:42.278 "write": true, 00:22:42.278 "unmap": true, 00:22:42.278 "flush": true, 00:22:42.278 "reset": true, 00:22:42.278 "nvme_admin": false, 00:22:42.278 "nvme_io": false, 00:22:42.278 "nvme_io_md": false, 00:22:42.278 "write_zeroes": true, 00:22:42.278 "zcopy": true, 00:22:42.278 "get_zone_info": false, 00:22:42.278 "zone_management": false, 00:22:42.278 "zone_append": false, 
00:22:42.278 "compare": false, 00:22:42.278 "compare_and_write": false, 00:22:42.278 "abort": true, 00:22:42.278 "seek_hole": false, 00:22:42.278 "seek_data": false, 00:22:42.278 "copy": true, 00:22:42.278 "nvme_iov_md": false 00:22:42.278 }, 00:22:42.278 "memory_domains": [ 00:22:42.278 { 00:22:42.278 "dma_device_id": "system", 00:22:42.278 "dma_device_type": 1 00:22:42.278 }, 00:22:42.278 { 00:22:42.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:42.278 "dma_device_type": 2 00:22:42.278 } 00:22:42.278 ], 00:22:42.278 "driver_specific": {} 00:22:42.278 } 00:22:42.278 ] 00:22:42.278 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.278 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:42.278 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:42.278 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:42.278 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:42.278 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.278 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.543 [2024-12-09 23:04:17.638894] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:42.543 [2024-12-09 23:04:17.639139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:42.543 [2024-12-09 23:04:17.639284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:42.543 [2024-12-09 23:04:17.641568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:42.543 [2024-12-09 23:04:17.641772] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:42.543 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.543 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:42.543 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:42.543 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:42.543 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:42.543 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:42.543 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:42.543 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:42.543 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:42.543 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:42.543 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:42.543 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:42.543 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:42.543 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.543 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.543 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.543 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:22:42.543 "name": "Existed_Raid", 00:22:42.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.543 "strip_size_kb": 0, 00:22:42.543 "state": "configuring", 00:22:42.543 "raid_level": "raid1", 00:22:42.543 "superblock": false, 00:22:42.543 "num_base_bdevs": 4, 00:22:42.543 "num_base_bdevs_discovered": 3, 00:22:42.543 "num_base_bdevs_operational": 4, 00:22:42.543 "base_bdevs_list": [ 00:22:42.543 { 00:22:42.543 "name": "BaseBdev1", 00:22:42.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.543 "is_configured": false, 00:22:42.543 "data_offset": 0, 00:22:42.543 "data_size": 0 00:22:42.543 }, 00:22:42.543 { 00:22:42.543 "name": "BaseBdev2", 00:22:42.543 "uuid": "3dc0661a-57f2-4cbc-992a-7806bceb01cd", 00:22:42.543 "is_configured": true, 00:22:42.543 "data_offset": 0, 00:22:42.543 "data_size": 65536 00:22:42.543 }, 00:22:42.543 { 00:22:42.543 "name": "BaseBdev3", 00:22:42.543 "uuid": "e2abaa1d-2f6e-49b8-a3e8-66e4d4849ce6", 00:22:42.543 "is_configured": true, 00:22:42.543 "data_offset": 0, 00:22:42.543 "data_size": 65536 00:22:42.543 }, 00:22:42.543 { 00:22:42.543 "name": "BaseBdev4", 00:22:42.543 "uuid": "3ae011d9-b1ca-4c05-a9f2-4da29875d543", 00:22:42.543 "is_configured": true, 00:22:42.543 "data_offset": 0, 00:22:42.543 "data_size": 65536 00:22:42.543 } 00:22:42.543 ] 00:22:42.543 }' 00:22:42.543 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:42.543 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.811 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:22:42.811 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.811 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.811 [2024-12-09 23:04:17.966997] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:22:42.811 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.811 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:42.811 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:42.811 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:42.811 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:42.811 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:42.811 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:42.811 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:42.811 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:42.811 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:42.811 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:42.811 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:42.811 23:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:42.812 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.812 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.812 23:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.812 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:42.812 "name": "Existed_Raid", 00:22:42.812 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:42.812 "strip_size_kb": 0, 00:22:42.812 "state": "configuring", 00:22:42.812 "raid_level": "raid1", 00:22:42.812 "superblock": false, 00:22:42.812 "num_base_bdevs": 4, 00:22:42.812 "num_base_bdevs_discovered": 2, 00:22:42.812 "num_base_bdevs_operational": 4, 00:22:42.812 "base_bdevs_list": [ 00:22:42.812 { 00:22:42.812 "name": "BaseBdev1", 00:22:42.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.812 "is_configured": false, 00:22:42.812 "data_offset": 0, 00:22:42.812 "data_size": 0 00:22:42.812 }, 00:22:42.812 { 00:22:42.812 "name": null, 00:22:42.812 "uuid": "3dc0661a-57f2-4cbc-992a-7806bceb01cd", 00:22:42.812 "is_configured": false, 00:22:42.812 "data_offset": 0, 00:22:42.812 "data_size": 65536 00:22:42.812 }, 00:22:42.812 { 00:22:42.812 "name": "BaseBdev3", 00:22:42.812 "uuid": "e2abaa1d-2f6e-49b8-a3e8-66e4d4849ce6", 00:22:42.812 "is_configured": true, 00:22:42.812 "data_offset": 0, 00:22:42.812 "data_size": 65536 00:22:42.812 }, 00:22:42.812 { 00:22:42.812 "name": "BaseBdev4", 00:22:42.812 "uuid": "3ae011d9-b1ca-4c05-a9f2-4da29875d543", 00:22:42.812 "is_configured": true, 00:22:42.812 "data_offset": 0, 00:22:42.812 "data_size": 65536 00:22:42.812 } 00:22:42.812 ] 00:22:42.812 }' 00:22:42.812 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:42.812 23:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.077 BaseBdev1 00:22:43.077 [2024-12-09 23:04:18.387546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.077 [ 00:22:43.077 { 00:22:43.077 "name": "BaseBdev1", 00:22:43.077 "aliases": [ 00:22:43.077 "7cc571b4-65c3-4cd0-87be-97358af88aa0" 00:22:43.077 ], 00:22:43.077 "product_name": "Malloc disk", 00:22:43.077 "block_size": 512, 00:22:43.077 "num_blocks": 65536, 00:22:43.077 "uuid": "7cc571b4-65c3-4cd0-87be-97358af88aa0", 00:22:43.077 "assigned_rate_limits": { 00:22:43.077 "rw_ios_per_sec": 0, 00:22:43.077 "rw_mbytes_per_sec": 0, 00:22:43.077 "r_mbytes_per_sec": 0, 00:22:43.077 "w_mbytes_per_sec": 0 00:22:43.077 }, 00:22:43.077 "claimed": true, 00:22:43.077 "claim_type": "exclusive_write", 00:22:43.077 "zoned": false, 00:22:43.077 "supported_io_types": { 00:22:43.077 "read": true, 00:22:43.077 "write": true, 00:22:43.077 "unmap": true, 00:22:43.077 "flush": true, 00:22:43.077 "reset": true, 00:22:43.077 "nvme_admin": false, 00:22:43.077 "nvme_io": false, 00:22:43.077 "nvme_io_md": false, 00:22:43.077 "write_zeroes": true, 00:22:43.077 "zcopy": true, 00:22:43.077 "get_zone_info": false, 00:22:43.077 "zone_management": false, 00:22:43.077 "zone_append": false, 00:22:43.077 "compare": false, 00:22:43.077 "compare_and_write": false, 00:22:43.077 "abort": true, 00:22:43.077 "seek_hole": false, 00:22:43.077 "seek_data": false, 00:22:43.077 "copy": true, 00:22:43.077 "nvme_iov_md": false 00:22:43.077 }, 00:22:43.077 "memory_domains": [ 00:22:43.077 { 00:22:43.077 "dma_device_id": "system", 00:22:43.077 "dma_device_type": 1 00:22:43.077 }, 00:22:43.077 { 00:22:43.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:43.077 "dma_device_type": 2 00:22:43.077 } 00:22:43.077 ], 00:22:43.077 "driver_specific": {} 00:22:43.077 } 00:22:43.077 ] 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:43.077 23:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.346 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:43.346 "name": "Existed_Raid", 00:22:43.346 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:43.346 "strip_size_kb": 0, 00:22:43.346 "state": "configuring", 00:22:43.346 "raid_level": "raid1", 00:22:43.346 "superblock": false, 00:22:43.346 "num_base_bdevs": 4, 00:22:43.346 "num_base_bdevs_discovered": 3, 00:22:43.346 "num_base_bdevs_operational": 4, 00:22:43.346 "base_bdevs_list": [ 00:22:43.346 { 00:22:43.346 "name": "BaseBdev1", 00:22:43.346 "uuid": "7cc571b4-65c3-4cd0-87be-97358af88aa0", 00:22:43.346 "is_configured": true, 00:22:43.346 "data_offset": 0, 00:22:43.346 "data_size": 65536 00:22:43.346 }, 00:22:43.346 { 00:22:43.346 "name": null, 00:22:43.346 "uuid": "3dc0661a-57f2-4cbc-992a-7806bceb01cd", 00:22:43.346 "is_configured": false, 00:22:43.346 "data_offset": 0, 00:22:43.346 "data_size": 65536 00:22:43.346 }, 00:22:43.346 { 00:22:43.346 "name": "BaseBdev3", 00:22:43.346 "uuid": "e2abaa1d-2f6e-49b8-a3e8-66e4d4849ce6", 00:22:43.346 "is_configured": true, 00:22:43.346 "data_offset": 0, 00:22:43.346 "data_size": 65536 00:22:43.346 }, 00:22:43.346 { 00:22:43.346 "name": "BaseBdev4", 00:22:43.346 "uuid": "3ae011d9-b1ca-4c05-a9f2-4da29875d543", 00:22:43.346 "is_configured": true, 00:22:43.346 "data_offset": 0, 00:22:43.346 "data_size": 65536 00:22:43.347 } 00:22:43.347 ] 00:22:43.347 }' 00:22:43.347 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:43.347 23:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.608 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:43.608 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:43.608 23:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.608 23:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.608 23:04:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.608 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:22:43.608 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:22:43.608 23:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.608 23:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.608 [2024-12-09 23:04:18.799765] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:43.608 23:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.608 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:43.608 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:43.608 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:43.608 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:43.608 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:43.608 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:43.608 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:43.608 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:43.608 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:43.608 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:43.608 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:22:43.608 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:43.608 23:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.608 23:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.608 23:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.608 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:43.608 "name": "Existed_Raid", 00:22:43.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.608 "strip_size_kb": 0, 00:22:43.608 "state": "configuring", 00:22:43.608 "raid_level": "raid1", 00:22:43.608 "superblock": false, 00:22:43.608 "num_base_bdevs": 4, 00:22:43.608 "num_base_bdevs_discovered": 2, 00:22:43.608 "num_base_bdevs_operational": 4, 00:22:43.608 "base_bdevs_list": [ 00:22:43.608 { 00:22:43.608 "name": "BaseBdev1", 00:22:43.608 "uuid": "7cc571b4-65c3-4cd0-87be-97358af88aa0", 00:22:43.608 "is_configured": true, 00:22:43.608 "data_offset": 0, 00:22:43.608 "data_size": 65536 00:22:43.608 }, 00:22:43.608 { 00:22:43.608 "name": null, 00:22:43.608 "uuid": "3dc0661a-57f2-4cbc-992a-7806bceb01cd", 00:22:43.608 "is_configured": false, 00:22:43.608 "data_offset": 0, 00:22:43.608 "data_size": 65536 00:22:43.608 }, 00:22:43.608 { 00:22:43.608 "name": null, 00:22:43.608 "uuid": "e2abaa1d-2f6e-49b8-a3e8-66e4d4849ce6", 00:22:43.608 "is_configured": false, 00:22:43.608 "data_offset": 0, 00:22:43.608 "data_size": 65536 00:22:43.608 }, 00:22:43.608 { 00:22:43.608 "name": "BaseBdev4", 00:22:43.608 "uuid": "3ae011d9-b1ca-4c05-a9f2-4da29875d543", 00:22:43.608 "is_configured": true, 00:22:43.608 "data_offset": 0, 00:22:43.608 "data_size": 65536 00:22:43.608 } 00:22:43.608 ] 00:22:43.608 }' 00:22:43.608 23:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:43.608 23:04:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.869 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:43.869 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:43.869 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.869 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.869 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.869 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:22:43.869 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:43.869 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.869 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.869 [2024-12-09 23:04:19.195858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:43.870 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.870 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:43.870 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:43.870 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:43.870 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:43.870 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:43.870 23:04:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:43.870 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:43.870 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:43.870 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:43.870 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:43.870 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:43.870 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:43.870 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.870 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.870 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.132 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:44.132 "name": "Existed_Raid", 00:22:44.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.132 "strip_size_kb": 0, 00:22:44.132 "state": "configuring", 00:22:44.132 "raid_level": "raid1", 00:22:44.132 "superblock": false, 00:22:44.132 "num_base_bdevs": 4, 00:22:44.132 "num_base_bdevs_discovered": 3, 00:22:44.132 "num_base_bdevs_operational": 4, 00:22:44.132 "base_bdevs_list": [ 00:22:44.132 { 00:22:44.132 "name": "BaseBdev1", 00:22:44.132 "uuid": "7cc571b4-65c3-4cd0-87be-97358af88aa0", 00:22:44.132 "is_configured": true, 00:22:44.132 "data_offset": 0, 00:22:44.132 "data_size": 65536 00:22:44.132 }, 00:22:44.132 { 00:22:44.132 "name": null, 00:22:44.132 "uuid": "3dc0661a-57f2-4cbc-992a-7806bceb01cd", 00:22:44.132 "is_configured": false, 00:22:44.132 "data_offset": 
0, 00:22:44.132 "data_size": 65536 00:22:44.132 }, 00:22:44.132 { 00:22:44.132 "name": "BaseBdev3", 00:22:44.132 "uuid": "e2abaa1d-2f6e-49b8-a3e8-66e4d4849ce6", 00:22:44.132 "is_configured": true, 00:22:44.132 "data_offset": 0, 00:22:44.132 "data_size": 65536 00:22:44.132 }, 00:22:44.132 { 00:22:44.132 "name": "BaseBdev4", 00:22:44.132 "uuid": "3ae011d9-b1ca-4c05-a9f2-4da29875d543", 00:22:44.132 "is_configured": true, 00:22:44.132 "data_offset": 0, 00:22:44.132 "data_size": 65536 00:22:44.132 } 00:22:44.132 ] 00:22:44.132 }' 00:22:44.132 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:44.132 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.393 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:44.393 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:44.393 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.393 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.393 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.393 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:22:44.393 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:44.393 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.393 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.393 [2024-12-09 23:04:19.588025] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:44.393 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.393 23:04:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:44.393 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:44.393 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:44.393 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:44.393 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:44.393 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:44.393 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:44.393 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:44.393 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:44.393 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:44.393 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:44.393 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.393 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:44.393 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.393 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.393 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:44.393 "name": "Existed_Raid", 00:22:44.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.393 "strip_size_kb": 0, 00:22:44.393 "state": "configuring", 00:22:44.393 
"raid_level": "raid1", 00:22:44.393 "superblock": false, 00:22:44.393 "num_base_bdevs": 4, 00:22:44.393 "num_base_bdevs_discovered": 2, 00:22:44.393 "num_base_bdevs_operational": 4, 00:22:44.393 "base_bdevs_list": [ 00:22:44.393 { 00:22:44.393 "name": null, 00:22:44.393 "uuid": "7cc571b4-65c3-4cd0-87be-97358af88aa0", 00:22:44.393 "is_configured": false, 00:22:44.393 "data_offset": 0, 00:22:44.393 "data_size": 65536 00:22:44.393 }, 00:22:44.393 { 00:22:44.393 "name": null, 00:22:44.393 "uuid": "3dc0661a-57f2-4cbc-992a-7806bceb01cd", 00:22:44.393 "is_configured": false, 00:22:44.393 "data_offset": 0, 00:22:44.393 "data_size": 65536 00:22:44.393 }, 00:22:44.393 { 00:22:44.393 "name": "BaseBdev3", 00:22:44.393 "uuid": "e2abaa1d-2f6e-49b8-a3e8-66e4d4849ce6", 00:22:44.393 "is_configured": true, 00:22:44.393 "data_offset": 0, 00:22:44.393 "data_size": 65536 00:22:44.393 }, 00:22:44.393 { 00:22:44.393 "name": "BaseBdev4", 00:22:44.393 "uuid": "3ae011d9-b1ca-4c05-a9f2-4da29875d543", 00:22:44.393 "is_configured": true, 00:22:44.393 "data_offset": 0, 00:22:44.393 "data_size": 65536 00:22:44.393 } 00:22:44.393 ] 00:22:44.393 }' 00:22:44.393 23:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:44.393 23:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.964 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:44.965 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:44.965 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.965 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.965 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.965 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:22:44.965 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:44.965 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.965 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.965 [2024-12-09 23:04:20.077990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:44.965 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.965 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:44.965 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:44.965 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:44.965 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:44.965 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:44.965 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:44.965 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:44.965 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:44.965 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:44.965 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:44.965 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:44.965 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:22:44.965 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.965 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.965 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.965 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:44.965 "name": "Existed_Raid", 00:22:44.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.965 "strip_size_kb": 0, 00:22:44.965 "state": "configuring", 00:22:44.965 "raid_level": "raid1", 00:22:44.965 "superblock": false, 00:22:44.965 "num_base_bdevs": 4, 00:22:44.965 "num_base_bdevs_discovered": 3, 00:22:44.965 "num_base_bdevs_operational": 4, 00:22:44.965 "base_bdevs_list": [ 00:22:44.965 { 00:22:44.965 "name": null, 00:22:44.965 "uuid": "7cc571b4-65c3-4cd0-87be-97358af88aa0", 00:22:44.965 "is_configured": false, 00:22:44.965 "data_offset": 0, 00:22:44.965 "data_size": 65536 00:22:44.965 }, 00:22:44.965 { 00:22:44.965 "name": "BaseBdev2", 00:22:44.965 "uuid": "3dc0661a-57f2-4cbc-992a-7806bceb01cd", 00:22:44.965 "is_configured": true, 00:22:44.965 "data_offset": 0, 00:22:44.965 "data_size": 65536 00:22:44.965 }, 00:22:44.965 { 00:22:44.965 "name": "BaseBdev3", 00:22:44.965 "uuid": "e2abaa1d-2f6e-49b8-a3e8-66e4d4849ce6", 00:22:44.965 "is_configured": true, 00:22:44.965 "data_offset": 0, 00:22:44.965 "data_size": 65536 00:22:44.965 }, 00:22:44.965 { 00:22:44.965 "name": "BaseBdev4", 00:22:44.965 "uuid": "3ae011d9-b1ca-4c05-a9f2-4da29875d543", 00:22:44.965 "is_configured": true, 00:22:44.965 "data_offset": 0, 00:22:44.965 "data_size": 65536 00:22:44.965 } 00:22:44.965 ] 00:22:44.965 }' 00:22:44.965 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:44.965 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.227 23:04:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7cc571b4-65c3-4cd0-87be-97358af88aa0 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.227 [2024-12-09 23:04:20.558903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:45.227 [2024-12-09 23:04:20.558985] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:45.227 [2024-12-09 23:04:20.558996] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:45.227 
[2024-12-09 23:04:20.559362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:22:45.227 [2024-12-09 23:04:20.559539] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:45.227 [2024-12-09 23:04:20.559548] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:45.227 [2024-12-09 23:04:20.559865] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:45.227 NewBaseBdev 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.227 [ 00:22:45.227 { 00:22:45.227 "name": "NewBaseBdev", 00:22:45.227 "aliases": [ 00:22:45.227 "7cc571b4-65c3-4cd0-87be-97358af88aa0" 00:22:45.227 ], 00:22:45.227 "product_name": "Malloc disk", 00:22:45.227 "block_size": 512, 00:22:45.227 "num_blocks": 65536, 00:22:45.227 "uuid": "7cc571b4-65c3-4cd0-87be-97358af88aa0", 00:22:45.227 "assigned_rate_limits": { 00:22:45.227 "rw_ios_per_sec": 0, 00:22:45.227 "rw_mbytes_per_sec": 0, 00:22:45.227 "r_mbytes_per_sec": 0, 00:22:45.227 "w_mbytes_per_sec": 0 00:22:45.227 }, 00:22:45.227 "claimed": true, 00:22:45.227 "claim_type": "exclusive_write", 00:22:45.227 "zoned": false, 00:22:45.227 "supported_io_types": { 00:22:45.227 "read": true, 00:22:45.227 "write": true, 00:22:45.227 "unmap": true, 00:22:45.227 "flush": true, 00:22:45.227 "reset": true, 00:22:45.227 "nvme_admin": false, 00:22:45.227 "nvme_io": false, 00:22:45.227 "nvme_io_md": false, 00:22:45.227 "write_zeroes": true, 00:22:45.227 "zcopy": true, 00:22:45.227 "get_zone_info": false, 00:22:45.227 "zone_management": false, 00:22:45.227 "zone_append": false, 00:22:45.227 "compare": false, 00:22:45.227 "compare_and_write": false, 00:22:45.227 "abort": true, 00:22:45.227 "seek_hole": false, 00:22:45.227 "seek_data": false, 00:22:45.227 "copy": true, 00:22:45.227 "nvme_iov_md": false 00:22:45.227 }, 00:22:45.227 "memory_domains": [ 00:22:45.227 { 00:22:45.227 "dma_device_id": "system", 00:22:45.227 "dma_device_type": 1 00:22:45.227 }, 00:22:45.227 { 00:22:45.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:45.227 "dma_device_type": 2 00:22:45.227 } 00:22:45.227 ], 00:22:45.227 "driver_specific": {} 00:22:45.227 } 00:22:45.227 ] 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
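Annotation (not part of the captured log): at this point `NewBaseBdev` has been re-created with the original UUID and claimed, so the array transitions from `configuring` to `online`. A Python sketch of the `verify_raid_bdev_state Existed_Raid online raid1 0 4` check that follows — the `raid` literal is a trimmed, timestamp-stripped copy of the `Existed_Raid` entry shown in the log, not live RPC data:

```python
import json

# Trimmed copy of the Existed_Raid entry from `bdev_raid_get_bdevs all`
# (timestamps stripped) once all four base bdevs are configured again.
raid = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 4,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "NewBaseBdev", "is_configured": true},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": true},
    {"name": "BaseBdev4", "is_configured": true}
  ]
}
""")

# Equivalent of: verify_raid_bdev_state Existed_Raid online raid1 0 4
assert raid["state"] == "online"
assert raid["raid_level"] == "raid1"
assert raid["strip_size_kb"] == 0
assert raid["num_base_bdevs"] == raid["num_base_bdevs_operational"] == 4
# All base bdevs configured again (booleans sum as 0/1).
assert sum(b["is_configured"] for b in raid["base_bdevs_list"]) == 4
print(raid["num_base_bdevs_discovered"])
```

Earlier in the run the same checks pass with `state == "configuring"` and fewer discovered base bdevs, e.g. `num_base_bdevs_discovered` dropping to 2 after `bdev_raid_remove_base_bdev BaseBdev3` and `bdev_malloc_delete BaseBdev1`.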
00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:45.227 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:45.489 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:45.489 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:45.489 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.489 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.489 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.489 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:45.489 "name": "Existed_Raid", 00:22:45.489 "uuid": "58f27e09-d693-4197-bbfb-b1e789198122", 00:22:45.489 "strip_size_kb": 0, 00:22:45.489 "state": "online", 00:22:45.489 
"raid_level": "raid1", 00:22:45.489 "superblock": false, 00:22:45.489 "num_base_bdevs": 4, 00:22:45.489 "num_base_bdevs_discovered": 4, 00:22:45.489 "num_base_bdevs_operational": 4, 00:22:45.489 "base_bdevs_list": [ 00:22:45.489 { 00:22:45.489 "name": "NewBaseBdev", 00:22:45.489 "uuid": "7cc571b4-65c3-4cd0-87be-97358af88aa0", 00:22:45.490 "is_configured": true, 00:22:45.490 "data_offset": 0, 00:22:45.490 "data_size": 65536 00:22:45.490 }, 00:22:45.490 { 00:22:45.490 "name": "BaseBdev2", 00:22:45.490 "uuid": "3dc0661a-57f2-4cbc-992a-7806bceb01cd", 00:22:45.490 "is_configured": true, 00:22:45.490 "data_offset": 0, 00:22:45.490 "data_size": 65536 00:22:45.490 }, 00:22:45.490 { 00:22:45.490 "name": "BaseBdev3", 00:22:45.490 "uuid": "e2abaa1d-2f6e-49b8-a3e8-66e4d4849ce6", 00:22:45.490 "is_configured": true, 00:22:45.490 "data_offset": 0, 00:22:45.490 "data_size": 65536 00:22:45.490 }, 00:22:45.490 { 00:22:45.490 "name": "BaseBdev4", 00:22:45.490 "uuid": "3ae011d9-b1ca-4c05-a9f2-4da29875d543", 00:22:45.490 "is_configured": true, 00:22:45.490 "data_offset": 0, 00:22:45.490 "data_size": 65536 00:22:45.490 } 00:22:45.490 ] 00:22:45.490 }' 00:22:45.490 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:45.490 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.752 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:22:45.752 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:45.752 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:45.752 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:45.752 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:45.752 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:22:45.752 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:45.752 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.752 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.752 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:45.752 [2024-12-09 23:04:20.943475] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:45.752 23:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.752 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:45.752 "name": "Existed_Raid", 00:22:45.752 "aliases": [ 00:22:45.752 "58f27e09-d693-4197-bbfb-b1e789198122" 00:22:45.752 ], 00:22:45.752 "product_name": "Raid Volume", 00:22:45.752 "block_size": 512, 00:22:45.752 "num_blocks": 65536, 00:22:45.752 "uuid": "58f27e09-d693-4197-bbfb-b1e789198122", 00:22:45.752 "assigned_rate_limits": { 00:22:45.752 "rw_ios_per_sec": 0, 00:22:45.752 "rw_mbytes_per_sec": 0, 00:22:45.752 "r_mbytes_per_sec": 0, 00:22:45.752 "w_mbytes_per_sec": 0 00:22:45.752 }, 00:22:45.752 "claimed": false, 00:22:45.752 "zoned": false, 00:22:45.752 "supported_io_types": { 00:22:45.752 "read": true, 00:22:45.752 "write": true, 00:22:45.752 "unmap": false, 00:22:45.752 "flush": false, 00:22:45.752 "reset": true, 00:22:45.752 "nvme_admin": false, 00:22:45.752 "nvme_io": false, 00:22:45.752 "nvme_io_md": false, 00:22:45.752 "write_zeroes": true, 00:22:45.752 "zcopy": false, 00:22:45.752 "get_zone_info": false, 00:22:45.752 "zone_management": false, 00:22:45.752 "zone_append": false, 00:22:45.752 "compare": false, 00:22:45.752 "compare_and_write": false, 00:22:45.752 "abort": false, 00:22:45.752 "seek_hole": false, 00:22:45.752 "seek_data": false, 00:22:45.752 
"copy": false, 00:22:45.752 "nvme_iov_md": false 00:22:45.752 }, 00:22:45.752 "memory_domains": [ 00:22:45.752 { 00:22:45.752 "dma_device_id": "system", 00:22:45.752 "dma_device_type": 1 00:22:45.752 }, 00:22:45.752 { 00:22:45.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:45.752 "dma_device_type": 2 00:22:45.752 }, 00:22:45.752 { 00:22:45.752 "dma_device_id": "system", 00:22:45.752 "dma_device_type": 1 00:22:45.752 }, 00:22:45.752 { 00:22:45.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:45.752 "dma_device_type": 2 00:22:45.752 }, 00:22:45.752 { 00:22:45.752 "dma_device_id": "system", 00:22:45.752 "dma_device_type": 1 00:22:45.752 }, 00:22:45.752 { 00:22:45.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:45.752 "dma_device_type": 2 00:22:45.752 }, 00:22:45.752 { 00:22:45.752 "dma_device_id": "system", 00:22:45.752 "dma_device_type": 1 00:22:45.752 }, 00:22:45.752 { 00:22:45.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:45.752 "dma_device_type": 2 00:22:45.752 } 00:22:45.752 ], 00:22:45.752 "driver_specific": { 00:22:45.752 "raid": { 00:22:45.752 "uuid": "58f27e09-d693-4197-bbfb-b1e789198122", 00:22:45.752 "strip_size_kb": 0, 00:22:45.752 "state": "online", 00:22:45.752 "raid_level": "raid1", 00:22:45.752 "superblock": false, 00:22:45.752 "num_base_bdevs": 4, 00:22:45.752 "num_base_bdevs_discovered": 4, 00:22:45.752 "num_base_bdevs_operational": 4, 00:22:45.752 "base_bdevs_list": [ 00:22:45.752 { 00:22:45.752 "name": "NewBaseBdev", 00:22:45.752 "uuid": "7cc571b4-65c3-4cd0-87be-97358af88aa0", 00:22:45.752 "is_configured": true, 00:22:45.752 "data_offset": 0, 00:22:45.752 "data_size": 65536 00:22:45.752 }, 00:22:45.752 { 00:22:45.752 "name": "BaseBdev2", 00:22:45.752 "uuid": "3dc0661a-57f2-4cbc-992a-7806bceb01cd", 00:22:45.752 "is_configured": true, 00:22:45.752 "data_offset": 0, 00:22:45.752 "data_size": 65536 00:22:45.752 }, 00:22:45.752 { 00:22:45.752 "name": "BaseBdev3", 00:22:45.752 "uuid": "e2abaa1d-2f6e-49b8-a3e8-66e4d4849ce6", 00:22:45.752 
"is_configured": true, 00:22:45.752 "data_offset": 0, 00:22:45.752 "data_size": 65536 00:22:45.752 }, 00:22:45.752 { 00:22:45.752 "name": "BaseBdev4", 00:22:45.752 "uuid": "3ae011d9-b1ca-4c05-a9f2-4da29875d543", 00:22:45.752 "is_configured": true, 00:22:45.752 "data_offset": 0, 00:22:45.752 "data_size": 65536 00:22:45.752 } 00:22:45.752 ] 00:22:45.752 } 00:22:45.752 } 00:22:45.752 }' 00:22:45.752 23:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:45.752 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:22:45.752 BaseBdev2 00:22:45.752 BaseBdev3 00:22:45.752 BaseBdev4' 00:22:45.752 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:45.752 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:45.752 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:45.752 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:22:45.752 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.752 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.752 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:45.752 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.752 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:45.752 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:45.752 23:04:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:45.752 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:45.752 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:45.752 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.752 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.013 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.014 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:46.014 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:46.014 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:46.014 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:46.014 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:46.014 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.014 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.014 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.014 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:46.014 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:46.014 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:46.014 23:04:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:22:46.014 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.014 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.014 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:46.014 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.014 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:46.014 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:46.014 23:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:46.014 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.014 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.014 [2024-12-09 23:04:21.215134] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:46.014 [2024-12-09 23:04:21.215211] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:46.014 [2024-12-09 23:04:21.215318] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:46.014 [2024-12-09 23:04:21.215659] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:46.014 [2024-12-09 23:04:21.215685] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:22:46.014 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.014 23:04:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 71304 00:22:46.014 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71304 ']' 00:22:46.014 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71304 00:22:46.014 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:22:46.014 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:46.014 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71304 00:22:46.014 killing process with pid 71304 00:22:46.014 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:46.014 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:46.014 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71304' 00:22:46.014 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71304 00:22:46.014 23:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71304 00:22:46.014 [2024-12-09 23:04:21.250473] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:46.274 [2024-12-09 23:04:21.536471] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:47.218 ************************************ 00:22:47.218 END TEST raid_state_function_test 00:22:47.218 ************************************ 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:22:47.218 00:22:47.218 real 0m9.503s 00:22:47.218 user 0m14.827s 00:22:47.218 sys 0m1.777s 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:22:47.218 23:04:22 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:22:47.218 23:04:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:47.218 23:04:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:47.218 23:04:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:47.218 ************************************ 00:22:47.218 START TEST raid_state_function_test_sb 00:22:47.218 ************************************ 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:47.218 
23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:22:47.218 Process raid pid: 71954 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71954 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71954' 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71954 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71954 ']' 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:47.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:47.218 23:04:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:47.218 [2024-12-09 23:04:22.532815] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:22:47.218 [2024-12-09 23:04:22.532991] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:47.477 [2024-12-09 23:04:22.703469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.736 [2024-12-09 23:04:22.845851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.736 [2024-12-09 23:04:23.020570] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:47.736 [2024-12-09 23:04:23.020661] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:48.305 23:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:48.305 23:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:22:48.305 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:48.305 23:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.305 23:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.305 [2024-12-09 23:04:23.433376] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:48.305 [2024-12-09 23:04:23.433466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:48.305 [2024-12-09 23:04:23.433478] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:48.305 [2024-12-09 23:04:23.433489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:48.305 [2024-12-09 23:04:23.433496] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:22:48.305 [2024-12-09 23:04:23.433505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:48.306 [2024-12-09 23:04:23.433512] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:48.306 [2024-12-09 23:04:23.433521] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:48.306 23:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.306 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:48.306 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:48.306 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:48.306 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:48.306 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:48.306 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:48.306 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:48.306 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:48.306 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:48.306 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:48.306 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.306 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:48.306 23:04:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.306 23:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.306 23:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.306 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:48.306 "name": "Existed_Raid", 00:22:48.306 "uuid": "fda91251-2e86-49e3-9f7d-575ccc3d7a73", 00:22:48.306 "strip_size_kb": 0, 00:22:48.306 "state": "configuring", 00:22:48.306 "raid_level": "raid1", 00:22:48.306 "superblock": true, 00:22:48.306 "num_base_bdevs": 4, 00:22:48.306 "num_base_bdevs_discovered": 0, 00:22:48.306 "num_base_bdevs_operational": 4, 00:22:48.306 "base_bdevs_list": [ 00:22:48.306 { 00:22:48.306 "name": "BaseBdev1", 00:22:48.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.306 "is_configured": false, 00:22:48.306 "data_offset": 0, 00:22:48.306 "data_size": 0 00:22:48.306 }, 00:22:48.306 { 00:22:48.306 "name": "BaseBdev2", 00:22:48.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.306 "is_configured": false, 00:22:48.306 "data_offset": 0, 00:22:48.306 "data_size": 0 00:22:48.306 }, 00:22:48.306 { 00:22:48.306 "name": "BaseBdev3", 00:22:48.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.306 "is_configured": false, 00:22:48.306 "data_offset": 0, 00:22:48.306 "data_size": 0 00:22:48.306 }, 00:22:48.306 { 00:22:48.306 "name": "BaseBdev4", 00:22:48.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.306 "is_configured": false, 00:22:48.306 "data_offset": 0, 00:22:48.306 "data_size": 0 00:22:48.306 } 00:22:48.306 ] 00:22:48.306 }' 00:22:48.306 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:48.306 23:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.567 23:04:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:48.567 23:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.567 23:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.567 [2024-12-09 23:04:23.809370] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:48.567 [2024-12-09 23:04:23.809438] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:48.567 23:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.567 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:48.567 23:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.567 23:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.567 [2024-12-09 23:04:23.821403] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:48.567 [2024-12-09 23:04:23.821480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:48.567 [2024-12-09 23:04:23.821492] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:48.567 [2024-12-09 23:04:23.821502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:48.567 [2024-12-09 23:04:23.821509] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:48.567 [2024-12-09 23:04:23.821518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:48.567 [2024-12-09 23:04:23.821525] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:22:48.567 [2024-12-09 23:04:23.821535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:48.567 23:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.567 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:48.567 23:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.568 23:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.568 [2024-12-09 23:04:23.864270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:48.568 BaseBdev1 00:22:48.568 23:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.568 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:48.568 23:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:48.568 23:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:48.568 23:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:48.568 23:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:48.568 23:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:48.568 23:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:48.568 23:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.568 23:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.568 23:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:48.568 23:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:48.568 23:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.568 23:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.568 [ 00:22:48.568 { 00:22:48.568 "name": "BaseBdev1", 00:22:48.568 "aliases": [ 00:22:48.568 "af9e3a9c-cf54-4966-ac21-ce4d80bc4d50" 00:22:48.568 ], 00:22:48.568 "product_name": "Malloc disk", 00:22:48.568 "block_size": 512, 00:22:48.568 "num_blocks": 65536, 00:22:48.568 "uuid": "af9e3a9c-cf54-4966-ac21-ce4d80bc4d50", 00:22:48.568 "assigned_rate_limits": { 00:22:48.568 "rw_ios_per_sec": 0, 00:22:48.568 "rw_mbytes_per_sec": 0, 00:22:48.568 "r_mbytes_per_sec": 0, 00:22:48.568 "w_mbytes_per_sec": 0 00:22:48.568 }, 00:22:48.568 "claimed": true, 00:22:48.568 "claim_type": "exclusive_write", 00:22:48.568 "zoned": false, 00:22:48.568 "supported_io_types": { 00:22:48.568 "read": true, 00:22:48.568 "write": true, 00:22:48.568 "unmap": true, 00:22:48.568 "flush": true, 00:22:48.568 "reset": true, 00:22:48.568 "nvme_admin": false, 00:22:48.568 "nvme_io": false, 00:22:48.568 "nvme_io_md": false, 00:22:48.568 "write_zeroes": true, 00:22:48.568 "zcopy": true, 00:22:48.568 "get_zone_info": false, 00:22:48.568 "zone_management": false, 00:22:48.568 "zone_append": false, 00:22:48.568 "compare": false, 00:22:48.568 "compare_and_write": false, 00:22:48.568 "abort": true, 00:22:48.568 "seek_hole": false, 00:22:48.568 "seek_data": false, 00:22:48.568 "copy": true, 00:22:48.568 "nvme_iov_md": false 00:22:48.568 }, 00:22:48.568 "memory_domains": [ 00:22:48.568 { 00:22:48.568 "dma_device_id": "system", 00:22:48.568 "dma_device_type": 1 00:22:48.568 }, 00:22:48.568 { 00:22:48.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:48.568 "dma_device_type": 2 00:22:48.568 } 00:22:48.568 ], 00:22:48.568 "driver_specific": {} 
00:22:48.568 } 00:22:48.568 ] 00:22:48.568 23:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.568 23:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:48.568 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:48.568 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:48.568 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:48.568 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:48.568 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:48.568 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:48.568 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:48.568 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:48.568 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:48.568 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:48.568 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.568 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:48.568 23:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.568 23:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.568 23:04:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.829 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:48.829 "name": "Existed_Raid", 00:22:48.829 "uuid": "e3da330d-0175-4462-8a32-218d8817856e", 00:22:48.829 "strip_size_kb": 0, 00:22:48.829 "state": "configuring", 00:22:48.829 "raid_level": "raid1", 00:22:48.829 "superblock": true, 00:22:48.829 "num_base_bdevs": 4, 00:22:48.829 "num_base_bdevs_discovered": 1, 00:22:48.829 "num_base_bdevs_operational": 4, 00:22:48.829 "base_bdevs_list": [ 00:22:48.829 { 00:22:48.829 "name": "BaseBdev1", 00:22:48.829 "uuid": "af9e3a9c-cf54-4966-ac21-ce4d80bc4d50", 00:22:48.829 "is_configured": true, 00:22:48.829 "data_offset": 2048, 00:22:48.829 "data_size": 63488 00:22:48.829 }, 00:22:48.829 { 00:22:48.829 "name": "BaseBdev2", 00:22:48.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.829 "is_configured": false, 00:22:48.829 "data_offset": 0, 00:22:48.829 "data_size": 0 00:22:48.829 }, 00:22:48.829 { 00:22:48.829 "name": "BaseBdev3", 00:22:48.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.829 "is_configured": false, 00:22:48.829 "data_offset": 0, 00:22:48.829 "data_size": 0 00:22:48.829 }, 00:22:48.829 { 00:22:48.829 "name": "BaseBdev4", 00:22:48.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.829 "is_configured": false, 00:22:48.829 "data_offset": 0, 00:22:48.829 "data_size": 0 00:22:48.829 } 00:22:48.829 ] 00:22:48.829 }' 00:22:48.829 23:04:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:48.829 23:04:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.090 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:49.090 23:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.090 23:04:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:22:49.090 [2024-12-09 23:04:24.248439] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:49.090 [2024-12-09 23:04:24.248520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:49.090 23:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.090 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:49.090 23:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.090 23:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.090 [2024-12-09 23:04:24.256512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:49.090 [2024-12-09 23:04:24.258701] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:49.090 [2024-12-09 23:04:24.258765] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:49.090 [2024-12-09 23:04:24.258776] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:49.090 [2024-12-09 23:04:24.258788] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:49.090 [2024-12-09 23:04:24.258796] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:49.090 [2024-12-09 23:04:24.258804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:49.090 23:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.090 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:49.090 23:04:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:49.090 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:49.090 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:49.090 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:49.090 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:49.090 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:49.090 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:49.090 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:49.090 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:49.090 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:49.090 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:49.090 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:49.090 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:49.090 23:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.090 23:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.090 23:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.090 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:49.090 "name": 
"Existed_Raid", 00:22:49.090 "uuid": "505675ed-7f22-4b99-a0d2-f5334bf94cc8", 00:22:49.090 "strip_size_kb": 0, 00:22:49.090 "state": "configuring", 00:22:49.090 "raid_level": "raid1", 00:22:49.090 "superblock": true, 00:22:49.090 "num_base_bdevs": 4, 00:22:49.090 "num_base_bdevs_discovered": 1, 00:22:49.090 "num_base_bdevs_operational": 4, 00:22:49.090 "base_bdevs_list": [ 00:22:49.090 { 00:22:49.090 "name": "BaseBdev1", 00:22:49.090 "uuid": "af9e3a9c-cf54-4966-ac21-ce4d80bc4d50", 00:22:49.090 "is_configured": true, 00:22:49.090 "data_offset": 2048, 00:22:49.090 "data_size": 63488 00:22:49.090 }, 00:22:49.090 { 00:22:49.090 "name": "BaseBdev2", 00:22:49.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.090 "is_configured": false, 00:22:49.090 "data_offset": 0, 00:22:49.090 "data_size": 0 00:22:49.090 }, 00:22:49.090 { 00:22:49.090 "name": "BaseBdev3", 00:22:49.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.090 "is_configured": false, 00:22:49.090 "data_offset": 0, 00:22:49.090 "data_size": 0 00:22:49.090 }, 00:22:49.090 { 00:22:49.090 "name": "BaseBdev4", 00:22:49.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.090 "is_configured": false, 00:22:49.090 "data_offset": 0, 00:22:49.090 "data_size": 0 00:22:49.090 } 00:22:49.090 ] 00:22:49.090 }' 00:22:49.090 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:49.090 23:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.350 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:49.350 23:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.350 23:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.350 [2024-12-09 23:04:24.660086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:49.350 
BaseBdev2 00:22:49.351 23:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.351 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:49.351 23:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:49.351 23:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:49.351 23:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:49.351 23:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:49.351 23:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:49.351 23:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:49.351 23:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.351 23:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.351 23:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.351 23:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:49.351 23:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.351 23:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.351 [ 00:22:49.351 { 00:22:49.351 "name": "BaseBdev2", 00:22:49.351 "aliases": [ 00:22:49.351 "8ca1e451-a529-4e44-bb46-9653ccd7cf0b" 00:22:49.351 ], 00:22:49.351 "product_name": "Malloc disk", 00:22:49.351 "block_size": 512, 00:22:49.351 "num_blocks": 65536, 00:22:49.351 "uuid": "8ca1e451-a529-4e44-bb46-9653ccd7cf0b", 00:22:49.351 "assigned_rate_limits": { 
00:22:49.351 "rw_ios_per_sec": 0, 00:22:49.351 "rw_mbytes_per_sec": 0, 00:22:49.351 "r_mbytes_per_sec": 0, 00:22:49.351 "w_mbytes_per_sec": 0 00:22:49.351 }, 00:22:49.351 "claimed": true, 00:22:49.351 "claim_type": "exclusive_write", 00:22:49.351 "zoned": false, 00:22:49.351 "supported_io_types": { 00:22:49.351 "read": true, 00:22:49.351 "write": true, 00:22:49.351 "unmap": true, 00:22:49.351 "flush": true, 00:22:49.351 "reset": true, 00:22:49.351 "nvme_admin": false, 00:22:49.351 "nvme_io": false, 00:22:49.351 "nvme_io_md": false, 00:22:49.351 "write_zeroes": true, 00:22:49.351 "zcopy": true, 00:22:49.351 "get_zone_info": false, 00:22:49.351 "zone_management": false, 00:22:49.351 "zone_append": false, 00:22:49.351 "compare": false, 00:22:49.351 "compare_and_write": false, 00:22:49.351 "abort": true, 00:22:49.351 "seek_hole": false, 00:22:49.351 "seek_data": false, 00:22:49.351 "copy": true, 00:22:49.351 "nvme_iov_md": false 00:22:49.351 }, 00:22:49.351 "memory_domains": [ 00:22:49.351 { 00:22:49.351 "dma_device_id": "system", 00:22:49.351 "dma_device_type": 1 00:22:49.351 }, 00:22:49.351 { 00:22:49.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:49.351 "dma_device_type": 2 00:22:49.351 } 00:22:49.351 ], 00:22:49.351 "driver_specific": {} 00:22:49.351 } 00:22:49.351 ] 00:22:49.351 23:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.351 23:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:49.351 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:49.351 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:49.351 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:49.351 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:22:49.351 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:49.351 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:49.351 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:49.351 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:49.351 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:49.351 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:49.351 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:49.351 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:49.351 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:49.351 23:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.351 23:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.351 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:49.351 23:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.610 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:49.610 "name": "Existed_Raid", 00:22:49.610 "uuid": "505675ed-7f22-4b99-a0d2-f5334bf94cc8", 00:22:49.610 "strip_size_kb": 0, 00:22:49.610 "state": "configuring", 00:22:49.610 "raid_level": "raid1", 00:22:49.610 "superblock": true, 00:22:49.610 "num_base_bdevs": 4, 00:22:49.610 "num_base_bdevs_discovered": 2, 00:22:49.610 "num_base_bdevs_operational": 4, 00:22:49.610 
"base_bdevs_list": [ 00:22:49.610 { 00:22:49.610 "name": "BaseBdev1", 00:22:49.610 "uuid": "af9e3a9c-cf54-4966-ac21-ce4d80bc4d50", 00:22:49.610 "is_configured": true, 00:22:49.610 "data_offset": 2048, 00:22:49.610 "data_size": 63488 00:22:49.610 }, 00:22:49.610 { 00:22:49.610 "name": "BaseBdev2", 00:22:49.610 "uuid": "8ca1e451-a529-4e44-bb46-9653ccd7cf0b", 00:22:49.610 "is_configured": true, 00:22:49.610 "data_offset": 2048, 00:22:49.610 "data_size": 63488 00:22:49.610 }, 00:22:49.610 { 00:22:49.610 "name": "BaseBdev3", 00:22:49.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.610 "is_configured": false, 00:22:49.610 "data_offset": 0, 00:22:49.610 "data_size": 0 00:22:49.610 }, 00:22:49.610 { 00:22:49.610 "name": "BaseBdev4", 00:22:49.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.610 "is_configured": false, 00:22:49.610 "data_offset": 0, 00:22:49.610 "data_size": 0 00:22:49.610 } 00:22:49.610 ] 00:22:49.610 }' 00:22:49.610 23:04:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:49.610 23:04:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.870 [2024-12-09 23:04:25.048604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:49.870 BaseBdev3 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.870 [ 00:22:49.870 { 00:22:49.870 "name": "BaseBdev3", 00:22:49.870 "aliases": [ 00:22:49.870 "148ee16b-f8af-4e18-9893-76e79e67547a" 00:22:49.870 ], 00:22:49.870 "product_name": "Malloc disk", 00:22:49.870 "block_size": 512, 00:22:49.870 "num_blocks": 65536, 00:22:49.870 "uuid": "148ee16b-f8af-4e18-9893-76e79e67547a", 00:22:49.870 "assigned_rate_limits": { 00:22:49.870 "rw_ios_per_sec": 0, 00:22:49.870 "rw_mbytes_per_sec": 0, 00:22:49.870 "r_mbytes_per_sec": 0, 00:22:49.870 "w_mbytes_per_sec": 0 00:22:49.870 }, 00:22:49.870 "claimed": true, 00:22:49.870 "claim_type": "exclusive_write", 00:22:49.870 "zoned": false, 00:22:49.870 "supported_io_types": { 00:22:49.870 "read": true, 00:22:49.870 
"write": true, 00:22:49.870 "unmap": true, 00:22:49.870 "flush": true, 00:22:49.870 "reset": true, 00:22:49.870 "nvme_admin": false, 00:22:49.870 "nvme_io": false, 00:22:49.870 "nvme_io_md": false, 00:22:49.870 "write_zeroes": true, 00:22:49.870 "zcopy": true, 00:22:49.870 "get_zone_info": false, 00:22:49.870 "zone_management": false, 00:22:49.870 "zone_append": false, 00:22:49.870 "compare": false, 00:22:49.870 "compare_and_write": false, 00:22:49.870 "abort": true, 00:22:49.870 "seek_hole": false, 00:22:49.870 "seek_data": false, 00:22:49.870 "copy": true, 00:22:49.870 "nvme_iov_md": false 00:22:49.870 }, 00:22:49.870 "memory_domains": [ 00:22:49.870 { 00:22:49.870 "dma_device_id": "system", 00:22:49.870 "dma_device_type": 1 00:22:49.870 }, 00:22:49.870 { 00:22:49.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:49.870 "dma_device_type": 2 00:22:49.870 } 00:22:49.870 ], 00:22:49.870 "driver_specific": {} 00:22:49.870 } 00:22:49.870 ] 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.870 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:49.870 "name": "Existed_Raid", 00:22:49.870 "uuid": "505675ed-7f22-4b99-a0d2-f5334bf94cc8", 00:22:49.870 "strip_size_kb": 0, 00:22:49.870 "state": "configuring", 00:22:49.870 "raid_level": "raid1", 00:22:49.870 "superblock": true, 00:22:49.870 "num_base_bdevs": 4, 00:22:49.870 "num_base_bdevs_discovered": 3, 00:22:49.870 "num_base_bdevs_operational": 4, 00:22:49.870 "base_bdevs_list": [ 00:22:49.870 { 00:22:49.870 "name": "BaseBdev1", 00:22:49.870 "uuid": "af9e3a9c-cf54-4966-ac21-ce4d80bc4d50", 00:22:49.870 "is_configured": true, 00:22:49.870 "data_offset": 2048, 00:22:49.870 "data_size": 63488 00:22:49.871 }, 00:22:49.871 { 00:22:49.871 "name": "BaseBdev2", 00:22:49.871 "uuid": 
"8ca1e451-a529-4e44-bb46-9653ccd7cf0b", 00:22:49.871 "is_configured": true, 00:22:49.871 "data_offset": 2048, 00:22:49.871 "data_size": 63488 00:22:49.871 }, 00:22:49.871 { 00:22:49.871 "name": "BaseBdev3", 00:22:49.871 "uuid": "148ee16b-f8af-4e18-9893-76e79e67547a", 00:22:49.871 "is_configured": true, 00:22:49.871 "data_offset": 2048, 00:22:49.871 "data_size": 63488 00:22:49.871 }, 00:22:49.871 { 00:22:49.871 "name": "BaseBdev4", 00:22:49.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.871 "is_configured": false, 00:22:49.871 "data_offset": 0, 00:22:49.871 "data_size": 0 00:22:49.871 } 00:22:49.871 ] 00:22:49.871 }' 00:22:49.871 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:49.871 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.132 [2024-12-09 23:04:25.424091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:50.132 [2024-12-09 23:04:25.424378] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:50.132 [2024-12-09 23:04:25.424391] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:50.132 [2024-12-09 23:04:25.424668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:50.132 BaseBdev4 00:22:50.132 [2024-12-09 23:04:25.424820] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:50.132 [2024-12-09 23:04:25.424831] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:22:50.132 [2024-12-09 23:04:25.424962] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.132 [ 00:22:50.132 { 00:22:50.132 "name": "BaseBdev4", 00:22:50.132 "aliases": [ 00:22:50.132 "0f7ca4f9-c2d6-4f6c-a8c1-14cabb7106df" 00:22:50.132 ], 00:22:50.132 "product_name": "Malloc disk", 00:22:50.132 "block_size": 512, 00:22:50.132 
"num_blocks": 65536, 00:22:50.132 "uuid": "0f7ca4f9-c2d6-4f6c-a8c1-14cabb7106df", 00:22:50.132 "assigned_rate_limits": { 00:22:50.132 "rw_ios_per_sec": 0, 00:22:50.132 "rw_mbytes_per_sec": 0, 00:22:50.132 "r_mbytes_per_sec": 0, 00:22:50.132 "w_mbytes_per_sec": 0 00:22:50.132 }, 00:22:50.132 "claimed": true, 00:22:50.132 "claim_type": "exclusive_write", 00:22:50.132 "zoned": false, 00:22:50.132 "supported_io_types": { 00:22:50.132 "read": true, 00:22:50.132 "write": true, 00:22:50.132 "unmap": true, 00:22:50.132 "flush": true, 00:22:50.132 "reset": true, 00:22:50.132 "nvme_admin": false, 00:22:50.132 "nvme_io": false, 00:22:50.132 "nvme_io_md": false, 00:22:50.132 "write_zeroes": true, 00:22:50.132 "zcopy": true, 00:22:50.132 "get_zone_info": false, 00:22:50.132 "zone_management": false, 00:22:50.132 "zone_append": false, 00:22:50.132 "compare": false, 00:22:50.132 "compare_and_write": false, 00:22:50.132 "abort": true, 00:22:50.132 "seek_hole": false, 00:22:50.132 "seek_data": false, 00:22:50.132 "copy": true, 00:22:50.132 "nvme_iov_md": false 00:22:50.132 }, 00:22:50.132 "memory_domains": [ 00:22:50.132 { 00:22:50.132 "dma_device_id": "system", 00:22:50.132 "dma_device_type": 1 00:22:50.132 }, 00:22:50.132 { 00:22:50.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:50.132 "dma_device_type": 2 00:22:50.132 } 00:22:50.132 ], 00:22:50.132 "driver_specific": {} 00:22:50.132 } 00:22:50.132 ] 00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.132 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.394 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:50.394 "name": "Existed_Raid", 00:22:50.394 "uuid": "505675ed-7f22-4b99-a0d2-f5334bf94cc8", 00:22:50.394 "strip_size_kb": 0, 00:22:50.394 "state": "online", 00:22:50.394 "raid_level": "raid1", 00:22:50.394 "superblock": true, 00:22:50.394 "num_base_bdevs": 4, 
00:22:50.394 "num_base_bdevs_discovered": 4, 00:22:50.394 "num_base_bdevs_operational": 4, 00:22:50.394 "base_bdevs_list": [ 00:22:50.394 { 00:22:50.394 "name": "BaseBdev1", 00:22:50.394 "uuid": "af9e3a9c-cf54-4966-ac21-ce4d80bc4d50", 00:22:50.394 "is_configured": true, 00:22:50.394 "data_offset": 2048, 00:22:50.394 "data_size": 63488 00:22:50.394 }, 00:22:50.394 { 00:22:50.394 "name": "BaseBdev2", 00:22:50.394 "uuid": "8ca1e451-a529-4e44-bb46-9653ccd7cf0b", 00:22:50.394 "is_configured": true, 00:22:50.394 "data_offset": 2048, 00:22:50.394 "data_size": 63488 00:22:50.394 }, 00:22:50.394 { 00:22:50.394 "name": "BaseBdev3", 00:22:50.394 "uuid": "148ee16b-f8af-4e18-9893-76e79e67547a", 00:22:50.394 "is_configured": true, 00:22:50.394 "data_offset": 2048, 00:22:50.394 "data_size": 63488 00:22:50.394 }, 00:22:50.394 { 00:22:50.394 "name": "BaseBdev4", 00:22:50.394 "uuid": "0f7ca4f9-c2d6-4f6c-a8c1-14cabb7106df", 00:22:50.394 "is_configured": true, 00:22:50.394 "data_offset": 2048, 00:22:50.394 "data_size": 63488 00:22:50.394 } 00:22:50.394 ] 00:22:50.394 }' 00:22:50.394 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:50.394 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.661 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:50.661 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:50.661 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:50.661 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:50.661 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:22:50.661 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:50.661 
23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:50.661 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:50.661 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.661 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.661 [2024-12-09 23:04:25.784611] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:50.661 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.661 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:50.661 "name": "Existed_Raid", 00:22:50.661 "aliases": [ 00:22:50.661 "505675ed-7f22-4b99-a0d2-f5334bf94cc8" 00:22:50.661 ], 00:22:50.661 "product_name": "Raid Volume", 00:22:50.661 "block_size": 512, 00:22:50.661 "num_blocks": 63488, 00:22:50.661 "uuid": "505675ed-7f22-4b99-a0d2-f5334bf94cc8", 00:22:50.661 "assigned_rate_limits": { 00:22:50.661 "rw_ios_per_sec": 0, 00:22:50.661 "rw_mbytes_per_sec": 0, 00:22:50.661 "r_mbytes_per_sec": 0, 00:22:50.661 "w_mbytes_per_sec": 0 00:22:50.661 }, 00:22:50.661 "claimed": false, 00:22:50.661 "zoned": false, 00:22:50.661 "supported_io_types": { 00:22:50.661 "read": true, 00:22:50.661 "write": true, 00:22:50.661 "unmap": false, 00:22:50.661 "flush": false, 00:22:50.661 "reset": true, 00:22:50.661 "nvme_admin": false, 00:22:50.661 "nvme_io": false, 00:22:50.661 "nvme_io_md": false, 00:22:50.661 "write_zeroes": true, 00:22:50.661 "zcopy": false, 00:22:50.661 "get_zone_info": false, 00:22:50.661 "zone_management": false, 00:22:50.661 "zone_append": false, 00:22:50.661 "compare": false, 00:22:50.661 "compare_and_write": false, 00:22:50.661 "abort": false, 00:22:50.661 "seek_hole": false, 00:22:50.661 "seek_data": false, 00:22:50.661 "copy": false, 00:22:50.661 
"nvme_iov_md": false 00:22:50.661 }, 00:22:50.661 "memory_domains": [ 00:22:50.661 { 00:22:50.661 "dma_device_id": "system", 00:22:50.661 "dma_device_type": 1 00:22:50.661 }, 00:22:50.661 { 00:22:50.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:50.661 "dma_device_type": 2 00:22:50.661 }, 00:22:50.661 { 00:22:50.661 "dma_device_id": "system", 00:22:50.661 "dma_device_type": 1 00:22:50.661 }, 00:22:50.661 { 00:22:50.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:50.661 "dma_device_type": 2 00:22:50.661 }, 00:22:50.661 { 00:22:50.661 "dma_device_id": "system", 00:22:50.661 "dma_device_type": 1 00:22:50.661 }, 00:22:50.661 { 00:22:50.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:50.661 "dma_device_type": 2 00:22:50.661 }, 00:22:50.661 { 00:22:50.661 "dma_device_id": "system", 00:22:50.661 "dma_device_type": 1 00:22:50.661 }, 00:22:50.661 { 00:22:50.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:50.661 "dma_device_type": 2 00:22:50.661 } 00:22:50.661 ], 00:22:50.661 "driver_specific": { 00:22:50.661 "raid": { 00:22:50.661 "uuid": "505675ed-7f22-4b99-a0d2-f5334bf94cc8", 00:22:50.661 "strip_size_kb": 0, 00:22:50.661 "state": "online", 00:22:50.661 "raid_level": "raid1", 00:22:50.661 "superblock": true, 00:22:50.661 "num_base_bdevs": 4, 00:22:50.661 "num_base_bdevs_discovered": 4, 00:22:50.661 "num_base_bdevs_operational": 4, 00:22:50.661 "base_bdevs_list": [ 00:22:50.661 { 00:22:50.661 "name": "BaseBdev1", 00:22:50.661 "uuid": "af9e3a9c-cf54-4966-ac21-ce4d80bc4d50", 00:22:50.661 "is_configured": true, 00:22:50.661 "data_offset": 2048, 00:22:50.661 "data_size": 63488 00:22:50.661 }, 00:22:50.661 { 00:22:50.661 "name": "BaseBdev2", 00:22:50.661 "uuid": "8ca1e451-a529-4e44-bb46-9653ccd7cf0b", 00:22:50.661 "is_configured": true, 00:22:50.661 "data_offset": 2048, 00:22:50.661 "data_size": 63488 00:22:50.661 }, 00:22:50.661 { 00:22:50.661 "name": "BaseBdev3", 00:22:50.661 "uuid": "148ee16b-f8af-4e18-9893-76e79e67547a", 00:22:50.661 "is_configured": true, 
00:22:50.661 "data_offset": 2048, 00:22:50.661 "data_size": 63488 00:22:50.661 }, 00:22:50.661 { 00:22:50.661 "name": "BaseBdev4", 00:22:50.661 "uuid": "0f7ca4f9-c2d6-4f6c-a8c1-14cabb7106df", 00:22:50.661 "is_configured": true, 00:22:50.661 "data_offset": 2048, 00:22:50.661 "data_size": 63488 00:22:50.661 } 00:22:50.661 ] 00:22:50.661 } 00:22:50.661 } 00:22:50.661 }' 00:22:50.661 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:50.661 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:50.661 BaseBdev2 00:22:50.661 BaseBdev3 00:22:50.661 BaseBdev4' 00:22:50.661 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:50.661 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:50.661 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:50.661 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:50.661 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.661 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:50.661 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.661 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.662 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:50.662 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:50.662 23:04:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:50.662 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:50.662 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.662 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.662 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:50.662 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.662 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:50.662 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:50.662 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:50.662 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:50.662 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:50.662 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.662 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.662 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.662 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:50.662 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:50.662 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:22:50.662 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:22:50.662 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.662 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.662 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:50.662 23:04:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.662 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:50.662 23:04:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:50.662 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:50.662 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.662 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.662 [2024-12-09 23:04:26.004338] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:50.965 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.965 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:50.965 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:22:50.965 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:50.965 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:22:50.965 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:50.965 23:04:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:50.965 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:50.965 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:50.965 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:50.965 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:50.965 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:50.965 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:50.965 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:50.965 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:50.965 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:50.965 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:50.965 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:50.965 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.965 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.965 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.965 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:50.965 "name": "Existed_Raid", 00:22:50.965 "uuid": "505675ed-7f22-4b99-a0d2-f5334bf94cc8", 00:22:50.965 "strip_size_kb": 0, 00:22:50.965 
"state": "online", 00:22:50.965 "raid_level": "raid1", 00:22:50.965 "superblock": true, 00:22:50.965 "num_base_bdevs": 4, 00:22:50.965 "num_base_bdevs_discovered": 3, 00:22:50.965 "num_base_bdevs_operational": 3, 00:22:50.965 "base_bdevs_list": [ 00:22:50.965 { 00:22:50.965 "name": null, 00:22:50.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:50.965 "is_configured": false, 00:22:50.965 "data_offset": 0, 00:22:50.965 "data_size": 63488 00:22:50.965 }, 00:22:50.965 { 00:22:50.965 "name": "BaseBdev2", 00:22:50.965 "uuid": "8ca1e451-a529-4e44-bb46-9653ccd7cf0b", 00:22:50.965 "is_configured": true, 00:22:50.965 "data_offset": 2048, 00:22:50.965 "data_size": 63488 00:22:50.965 }, 00:22:50.965 { 00:22:50.965 "name": "BaseBdev3", 00:22:50.965 "uuid": "148ee16b-f8af-4e18-9893-76e79e67547a", 00:22:50.965 "is_configured": true, 00:22:50.965 "data_offset": 2048, 00:22:50.965 "data_size": 63488 00:22:50.965 }, 00:22:50.965 { 00:22:50.965 "name": "BaseBdev4", 00:22:50.965 "uuid": "0f7ca4f9-c2d6-4f6c-a8c1-14cabb7106df", 00:22:50.965 "is_configured": true, 00:22:50.965 "data_offset": 2048, 00:22:50.965 "data_size": 63488 00:22:50.965 } 00:22:50.965 ] 00:22:50.965 }' 00:22:50.965 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:50.965 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.225 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:51.225 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:51.225 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.225 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:51.225 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.225 23:04:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.225 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.225 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:51.225 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:51.225 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:51.225 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.225 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.225 [2024-12-09 23:04:26.400002] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:51.225 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.225 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:51.225 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:51.225 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.225 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.225 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.225 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:51.225 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.225 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:51.225 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:22:51.225 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:22:51.225 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.225 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.225 [2024-12-09 23:04:26.499925] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:51.225 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.225 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:51.225 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:51.225 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:51.225 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.225 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.225 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.225 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.494 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:51.494 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:51.494 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:22:51.494 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.494 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.494 [2024-12-09 23:04:26.597145] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:51.494 [2024-12-09 23:04:26.597238] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:51.494 [2024-12-09 23:04:26.656588] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:51.494 [2024-12-09 23:04:26.656644] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:51.495 [2024-12-09 23:04:26.656656] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.495 BaseBdev2 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:22:51.495 [ 00:22:51.495 { 00:22:51.495 "name": "BaseBdev2", 00:22:51.495 "aliases": [ 00:22:51.495 "58037ff5-4af4-41dd-a0a3-59ea0fb3df9c" 00:22:51.495 ], 00:22:51.495 "product_name": "Malloc disk", 00:22:51.495 "block_size": 512, 00:22:51.495 "num_blocks": 65536, 00:22:51.495 "uuid": "58037ff5-4af4-41dd-a0a3-59ea0fb3df9c", 00:22:51.495 "assigned_rate_limits": { 00:22:51.495 "rw_ios_per_sec": 0, 00:22:51.495 "rw_mbytes_per_sec": 0, 00:22:51.495 "r_mbytes_per_sec": 0, 00:22:51.495 "w_mbytes_per_sec": 0 00:22:51.495 }, 00:22:51.495 "claimed": false, 00:22:51.495 "zoned": false, 00:22:51.495 "supported_io_types": { 00:22:51.495 "read": true, 00:22:51.495 "write": true, 00:22:51.495 "unmap": true, 00:22:51.495 "flush": true, 00:22:51.495 "reset": true, 00:22:51.495 "nvme_admin": false, 00:22:51.495 "nvme_io": false, 00:22:51.495 "nvme_io_md": false, 00:22:51.495 "write_zeroes": true, 00:22:51.495 "zcopy": true, 00:22:51.495 "get_zone_info": false, 00:22:51.495 "zone_management": false, 00:22:51.495 "zone_append": false, 00:22:51.495 "compare": false, 00:22:51.495 "compare_and_write": false, 00:22:51.495 "abort": true, 00:22:51.495 "seek_hole": false, 00:22:51.495 "seek_data": false, 00:22:51.495 "copy": true, 00:22:51.495 "nvme_iov_md": false 00:22:51.495 }, 00:22:51.495 "memory_domains": [ 00:22:51.495 { 00:22:51.495 "dma_device_id": "system", 00:22:51.495 "dma_device_type": 1 00:22:51.495 }, 00:22:51.495 { 00:22:51.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:51.495 "dma_device_type": 2 00:22:51.495 } 00:22:51.495 ], 00:22:51.495 "driver_specific": {} 00:22:51.495 } 00:22:51.495 ] 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:51.495 23:04:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.495 BaseBdev3 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.495 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:51.496 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.496 23:04:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.496 [ 00:22:51.496 { 00:22:51.496 "name": "BaseBdev3", 00:22:51.496 "aliases": [ 00:22:51.496 "ca3d61b1-2f58-411d-bc2b-f7cf7406cb1b" 00:22:51.496 ], 00:22:51.496 "product_name": "Malloc disk", 00:22:51.496 "block_size": 512, 00:22:51.496 "num_blocks": 65536, 00:22:51.496 "uuid": "ca3d61b1-2f58-411d-bc2b-f7cf7406cb1b", 00:22:51.496 "assigned_rate_limits": { 00:22:51.496 "rw_ios_per_sec": 0, 00:22:51.496 "rw_mbytes_per_sec": 0, 00:22:51.496 "r_mbytes_per_sec": 0, 00:22:51.496 "w_mbytes_per_sec": 0 00:22:51.496 }, 00:22:51.496 "claimed": false, 00:22:51.496 "zoned": false, 00:22:51.496 "supported_io_types": { 00:22:51.496 "read": true, 00:22:51.496 "write": true, 00:22:51.496 "unmap": true, 00:22:51.496 "flush": true, 00:22:51.496 "reset": true, 00:22:51.496 "nvme_admin": false, 00:22:51.496 "nvme_io": false, 00:22:51.496 "nvme_io_md": false, 00:22:51.496 "write_zeroes": true, 00:22:51.496 "zcopy": true, 00:22:51.496 "get_zone_info": false, 00:22:51.496 "zone_management": false, 00:22:51.496 "zone_append": false, 00:22:51.496 "compare": false, 00:22:51.496 "compare_and_write": false, 00:22:51.496 "abort": true, 00:22:51.496 "seek_hole": false, 00:22:51.496 "seek_data": false, 00:22:51.496 "copy": true, 00:22:51.496 "nvme_iov_md": false 00:22:51.496 }, 00:22:51.496 "memory_domains": [ 00:22:51.496 { 00:22:51.496 "dma_device_id": "system", 00:22:51.496 "dma_device_type": 1 00:22:51.496 }, 00:22:51.496 { 00:22:51.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:51.496 "dma_device_type": 2 00:22:51.496 } 00:22:51.496 ], 00:22:51.496 "driver_specific": {} 00:22:51.496 } 00:22:51.496 ] 00:22:51.496 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.496 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:51.496 23:04:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:51.496 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:51.496 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:22:51.496 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.496 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.496 BaseBdev4 00:22:51.496 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.496 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:22:51.496 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:22:51.496 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:51.496 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:51.496 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:51.496 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:51.496 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:51.496 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.496 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.496 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.496 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:51.496 23:04:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.496 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.496 [ 00:22:51.496 { 00:22:51.496 "name": "BaseBdev4", 00:22:51.496 "aliases": [ 00:22:51.496 "fbcb77c2-3ae3-4aa1-b3f3-b4e50aa437f1" 00:22:51.496 ], 00:22:51.496 "product_name": "Malloc disk", 00:22:51.496 "block_size": 512, 00:22:51.496 "num_blocks": 65536, 00:22:51.496 "uuid": "fbcb77c2-3ae3-4aa1-b3f3-b4e50aa437f1", 00:22:51.496 "assigned_rate_limits": { 00:22:51.759 "rw_ios_per_sec": 0, 00:22:51.759 "rw_mbytes_per_sec": 0, 00:22:51.759 "r_mbytes_per_sec": 0, 00:22:51.759 "w_mbytes_per_sec": 0 00:22:51.759 }, 00:22:51.759 "claimed": false, 00:22:51.759 "zoned": false, 00:22:51.759 "supported_io_types": { 00:22:51.759 "read": true, 00:22:51.759 "write": true, 00:22:51.759 "unmap": true, 00:22:51.759 "flush": true, 00:22:51.759 "reset": true, 00:22:51.759 "nvme_admin": false, 00:22:51.759 "nvme_io": false, 00:22:51.759 "nvme_io_md": false, 00:22:51.759 "write_zeroes": true, 00:22:51.759 "zcopy": true, 00:22:51.759 "get_zone_info": false, 00:22:51.759 "zone_management": false, 00:22:51.759 "zone_append": false, 00:22:51.759 "compare": false, 00:22:51.759 "compare_and_write": false, 00:22:51.759 "abort": true, 00:22:51.759 "seek_hole": false, 00:22:51.759 "seek_data": false, 00:22:51.759 "copy": true, 00:22:51.759 "nvme_iov_md": false 00:22:51.759 }, 00:22:51.759 "memory_domains": [ 00:22:51.759 { 00:22:51.759 "dma_device_id": "system", 00:22:51.759 "dma_device_type": 1 00:22:51.759 }, 00:22:51.759 { 00:22:51.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:51.759 "dma_device_type": 2 00:22:51.759 } 00:22:51.759 ], 00:22:51.759 "driver_specific": {} 00:22:51.759 } 00:22:51.759 ] 00:22:51.759 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.759 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
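Each `bdev_malloc_create` in this log is followed by a `waitforbdev` call that repeatedly probes `bdev_get_bdevs -b NAME -t 2000` until the bdev shows up. A minimal sketch of that poll-until-ready pattern, with a generic probe command standing in for the real `rpc_cmd` invocation (the helper name `waitfor` and the one-second poll interval are assumptions for illustration, not the suite's actual implementation):

```shell
# Hedged sketch of the waitforbdev pattern: run a probe command until it
# succeeds or a timeout (in seconds) expires. In the real suite the probe
# would be "rpc_cmd bdev_get_bdevs -b $bdev_name -t 2000".
waitfor() {
  probe=$1 timeout=${2:-5} i=0
  while [ "$i" -lt "$timeout" ]; do
    if $probe; then
      return 0          # probe succeeded: the resource is ready
    fi
    i=$((i + 1))
    sleep 1             # back off before the next probe
  done
  return 1              # timed out without the probe ever succeeding
}

waitfor true 3 && echo ready
```

With a probe that always succeeds the loop returns on the first iteration; with one that always fails it sleeps once per attempt and then reports failure.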
00:22:51.759 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:51.759 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:51.759 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:51.759 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.759 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.759 [2024-12-09 23:04:26.857314] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:51.759 [2024-12-09 23:04:26.857367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:51.759 [2024-12-09 23:04:26.857389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:51.759 [2024-12-09 23:04:26.859274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:51.759 [2024-12-09 23:04:26.859326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:51.759 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.759 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:51.759 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:51.759 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:51.759 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:51.759 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:22:51.759 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:51.759 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:51.759 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:51.759 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:51.759 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:51.759 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:51.759 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.759 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.759 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.759 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.759 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:51.759 "name": "Existed_Raid", 00:22:51.759 "uuid": "6a6c3690-0a6c-4a99-8ac4-780939f29d1a", 00:22:51.759 "strip_size_kb": 0, 00:22:51.759 "state": "configuring", 00:22:51.759 "raid_level": "raid1", 00:22:51.759 "superblock": true, 00:22:51.759 "num_base_bdevs": 4, 00:22:51.759 "num_base_bdevs_discovered": 3, 00:22:51.759 "num_base_bdevs_operational": 4, 00:22:51.759 "base_bdevs_list": [ 00:22:51.759 { 00:22:51.759 "name": "BaseBdev1", 00:22:51.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.759 "is_configured": false, 00:22:51.759 "data_offset": 0, 00:22:51.759 "data_size": 0 00:22:51.759 }, 00:22:51.759 { 00:22:51.759 "name": "BaseBdev2", 00:22:51.759 "uuid": "58037ff5-4af4-41dd-a0a3-59ea0fb3df9c", 
00:22:51.759 "is_configured": true, 00:22:51.759 "data_offset": 2048, 00:22:51.759 "data_size": 63488 00:22:51.759 }, 00:22:51.759 { 00:22:51.759 "name": "BaseBdev3", 00:22:51.759 "uuid": "ca3d61b1-2f58-411d-bc2b-f7cf7406cb1b", 00:22:51.759 "is_configured": true, 00:22:51.759 "data_offset": 2048, 00:22:51.759 "data_size": 63488 00:22:51.759 }, 00:22:51.759 { 00:22:51.759 "name": "BaseBdev4", 00:22:51.759 "uuid": "fbcb77c2-3ae3-4aa1-b3f3-b4e50aa437f1", 00:22:51.759 "is_configured": true, 00:22:51.759 "data_offset": 2048, 00:22:51.759 "data_size": 63488 00:22:51.759 } 00:22:51.759 ] 00:22:51.759 }' 00:22:51.759 23:04:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:51.759 23:04:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.020 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:22:52.020 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.020 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.020 [2024-12-09 23:04:27.169395] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:52.020 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.020 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:52.020 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:52.020 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:52.020 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:52.020 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
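The `verify_raid_bdev_state` checks above pull the raid info with `bdev_raid_get_bdevs all` and a `jq` selection, then compare fields like `num_base_bdevs_discovered` (which drops from 3 to 2 here after `bdev_raid_remove_base_bdev BaseBdev2`). As a self-contained stand-in for that jq step, the count of configured base bdevs can be recovered from the JSON dump with a plain `grep -c`; the JSON shape below is copied from the log output at this point, and `grep` replaces `jq` purely so the sketch has no external dependencies:

```shell
# Count configured base bdevs in a raid-info JSON dump. The suite uses jq for
# this; grep -c on the "is_configured": true lines is an illustrative stand-in.
info='{
  "name": "Existed_Raid",
  "state": "configuring",
  "base_bdevs_list": [
    { "name": "BaseBdev1", "is_configured": false },
    { "name": null,        "is_configured": false },
    { "name": "BaseBdev3", "is_configured": true },
    { "name": "BaseBdev4", "is_configured": true }
  ]
}'
discovered=$(printf '%s\n' "$info" | grep -c '"is_configured": true')
echo "$discovered"
```

This matches the state dump above: with BaseBdev1 never created and BaseBdev2 removed, only two base bdevs remain configured while `num_base_bdevs_operational` stays at 4.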
00:22:52.020 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:52.020 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:52.020 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:52.020 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:52.020 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:52.020 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:52.020 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:52.020 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.020 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.020 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.020 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:52.020 "name": "Existed_Raid", 00:22:52.020 "uuid": "6a6c3690-0a6c-4a99-8ac4-780939f29d1a", 00:22:52.020 "strip_size_kb": 0, 00:22:52.020 "state": "configuring", 00:22:52.020 "raid_level": "raid1", 00:22:52.020 "superblock": true, 00:22:52.020 "num_base_bdevs": 4, 00:22:52.020 "num_base_bdevs_discovered": 2, 00:22:52.020 "num_base_bdevs_operational": 4, 00:22:52.020 "base_bdevs_list": [ 00:22:52.020 { 00:22:52.020 "name": "BaseBdev1", 00:22:52.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.020 "is_configured": false, 00:22:52.020 "data_offset": 0, 00:22:52.020 "data_size": 0 00:22:52.020 }, 00:22:52.020 { 00:22:52.020 "name": null, 00:22:52.020 "uuid": "58037ff5-4af4-41dd-a0a3-59ea0fb3df9c", 00:22:52.020 
"is_configured": false, 00:22:52.020 "data_offset": 0, 00:22:52.020 "data_size": 63488 00:22:52.020 }, 00:22:52.020 { 00:22:52.020 "name": "BaseBdev3", 00:22:52.020 "uuid": "ca3d61b1-2f58-411d-bc2b-f7cf7406cb1b", 00:22:52.020 "is_configured": true, 00:22:52.020 "data_offset": 2048, 00:22:52.020 "data_size": 63488 00:22:52.020 }, 00:22:52.020 { 00:22:52.020 "name": "BaseBdev4", 00:22:52.020 "uuid": "fbcb77c2-3ae3-4aa1-b3f3-b4e50aa437f1", 00:22:52.020 "is_configured": true, 00:22:52.020 "data_offset": 2048, 00:22:52.020 "data_size": 63488 00:22:52.020 } 00:22:52.020 ] 00:22:52.020 }' 00:22:52.020 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:52.020 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.282 [2024-12-09 23:04:27.540010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:52.282 BaseBdev1 
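The malloc bdevs in this test are all created with `bdev_malloc_create 32 512`, i.e. a 32 MiB device with a 512-byte block size, and their descriptors consistently report `"num_blocks": 65536`. That figure is just the capacity divided by the block size, which the arithmetic below reproduces:

```shell
# num_blocks for "bdev_malloc_create 32 512": 32 MiB of capacity split into
# 512-byte blocks gives 32 * 1024 * 1024 / 512 = 65536, matching the
# "num_blocks": 65536 field in the bdev descriptors dumped in this log.
size_mb=32 block_size=512
num_blocks=$(( size_mb * 1024 * 1024 / block_size ))
echo "$num_blocks"
```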
00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.282 [ 00:22:52.282 { 00:22:52.282 "name": "BaseBdev1", 00:22:52.282 "aliases": [ 00:22:52.282 "f328074b-6aed-45ed-a517-7b6818464ed0" 00:22:52.282 ], 00:22:52.282 "product_name": "Malloc disk", 00:22:52.282 "block_size": 512, 00:22:52.282 "num_blocks": 65536, 00:22:52.282 "uuid": "f328074b-6aed-45ed-a517-7b6818464ed0", 00:22:52.282 "assigned_rate_limits": { 00:22:52.282 
"rw_ios_per_sec": 0, 00:22:52.282 "rw_mbytes_per_sec": 0, 00:22:52.282 "r_mbytes_per_sec": 0, 00:22:52.282 "w_mbytes_per_sec": 0 00:22:52.282 }, 00:22:52.282 "claimed": true, 00:22:52.282 "claim_type": "exclusive_write", 00:22:52.282 "zoned": false, 00:22:52.282 "supported_io_types": { 00:22:52.282 "read": true, 00:22:52.282 "write": true, 00:22:52.282 "unmap": true, 00:22:52.282 "flush": true, 00:22:52.282 "reset": true, 00:22:52.282 "nvme_admin": false, 00:22:52.282 "nvme_io": false, 00:22:52.282 "nvme_io_md": false, 00:22:52.282 "write_zeroes": true, 00:22:52.282 "zcopy": true, 00:22:52.282 "get_zone_info": false, 00:22:52.282 "zone_management": false, 00:22:52.282 "zone_append": false, 00:22:52.282 "compare": false, 00:22:52.282 "compare_and_write": false, 00:22:52.282 "abort": true, 00:22:52.282 "seek_hole": false, 00:22:52.282 "seek_data": false, 00:22:52.282 "copy": true, 00:22:52.282 "nvme_iov_md": false 00:22:52.282 }, 00:22:52.282 "memory_domains": [ 00:22:52.282 { 00:22:52.282 "dma_device_id": "system", 00:22:52.282 "dma_device_type": 1 00:22:52.282 }, 00:22:52.282 { 00:22:52.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:52.282 "dma_device_type": 2 00:22:52.282 } 00:22:52.282 ], 00:22:52.282 "driver_specific": {} 00:22:52.282 } 00:22:52.282 ] 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.282 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:52.282 "name": "Existed_Raid", 00:22:52.282 "uuid": "6a6c3690-0a6c-4a99-8ac4-780939f29d1a", 00:22:52.282 "strip_size_kb": 0, 00:22:52.282 "state": "configuring", 00:22:52.282 "raid_level": "raid1", 00:22:52.282 "superblock": true, 00:22:52.282 "num_base_bdevs": 4, 00:22:52.282 "num_base_bdevs_discovered": 3, 00:22:52.282 "num_base_bdevs_operational": 4, 00:22:52.282 "base_bdevs_list": [ 00:22:52.282 { 00:22:52.282 "name": "BaseBdev1", 00:22:52.282 "uuid": "f328074b-6aed-45ed-a517-7b6818464ed0", 00:22:52.282 "is_configured": true, 00:22:52.282 "data_offset": 2048, 00:22:52.282 "data_size": 63488 
00:22:52.282 }, 00:22:52.282 { 00:22:52.282 "name": null, 00:22:52.282 "uuid": "58037ff5-4af4-41dd-a0a3-59ea0fb3df9c", 00:22:52.282 "is_configured": false, 00:22:52.282 "data_offset": 0, 00:22:52.282 "data_size": 63488 00:22:52.282 }, 00:22:52.282 { 00:22:52.282 "name": "BaseBdev3", 00:22:52.282 "uuid": "ca3d61b1-2f58-411d-bc2b-f7cf7406cb1b", 00:22:52.282 "is_configured": true, 00:22:52.282 "data_offset": 2048, 00:22:52.282 "data_size": 63488 00:22:52.282 }, 00:22:52.283 { 00:22:52.283 "name": "BaseBdev4", 00:22:52.283 "uuid": "fbcb77c2-3ae3-4aa1-b3f3-b4e50aa437f1", 00:22:52.283 "is_configured": true, 00:22:52.283 "data_offset": 2048, 00:22:52.283 "data_size": 63488 00:22:52.283 } 00:22:52.283 ] 00:22:52.283 }' 00:22:52.283 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:52.283 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.543 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:52.543 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:52.543 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.543 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.806 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.806 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:22:52.806 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:22:52.806 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.806 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.806 
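The `verify_raid_bdev_state Existed_Raid configuring raid1 0 4` calls above compare fields of the `raid_bdev_info` JSON against expected values using `jq`. A minimal Python sketch of that comparison, using a copy of the JSON dumped in the log (trimmed to the fields the shell helper inspects; the `check_state` name is hypothetical, not part of the test script):

```python
import json

# raid_bdev_info as dumped by `rpc_cmd bdev_raid_get_bdevs all` above,
# trimmed to the fields verify_raid_bdev_state actually compares
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 0,
  "state": "configuring",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": null,        "is_configured": false},
    {"name": "BaseBdev3", "is_configured": true},
    {"name": "BaseBdev4", "is_configured": true}
  ]
}
""")

def check_state(info, expected_state, raid_level, strip_size, operational):
    # mirror the jq-based comparisons the shell helper performs
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational
    # the discovered count must match the configured entries in base_bdevs_list
    configured = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert info["num_base_bdevs_discovered"] == configured
    return configured

configured_count = check_state(raid_bdev_info, "configuring", "raid1", 0, 4)
```

Here three of four base bdevs are configured (the second slot is a removed bdev, name `null`), so the raid stays in `configuring` until the missing slot is filled.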
[2024-12-09 23:04:27.932190] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:52.806 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.806 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:52.806 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:52.806 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:52.806 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:52.806 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:52.806 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:52.806 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:52.806 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:52.806 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:52.806 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:52.806 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:52.806 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:52.806 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.806 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.806 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.806 23:04:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:52.806 "name": "Existed_Raid", 00:22:52.806 "uuid": "6a6c3690-0a6c-4a99-8ac4-780939f29d1a", 00:22:52.806 "strip_size_kb": 0, 00:22:52.806 "state": "configuring", 00:22:52.806 "raid_level": "raid1", 00:22:52.806 "superblock": true, 00:22:52.806 "num_base_bdevs": 4, 00:22:52.806 "num_base_bdevs_discovered": 2, 00:22:52.806 "num_base_bdevs_operational": 4, 00:22:52.806 "base_bdevs_list": [ 00:22:52.806 { 00:22:52.806 "name": "BaseBdev1", 00:22:52.806 "uuid": "f328074b-6aed-45ed-a517-7b6818464ed0", 00:22:52.806 "is_configured": true, 00:22:52.806 "data_offset": 2048, 00:22:52.806 "data_size": 63488 00:22:52.806 }, 00:22:52.806 { 00:22:52.806 "name": null, 00:22:52.806 "uuid": "58037ff5-4af4-41dd-a0a3-59ea0fb3df9c", 00:22:52.806 "is_configured": false, 00:22:52.806 "data_offset": 0, 00:22:52.806 "data_size": 63488 00:22:52.806 }, 00:22:52.806 { 00:22:52.806 "name": null, 00:22:52.806 "uuid": "ca3d61b1-2f58-411d-bc2b-f7cf7406cb1b", 00:22:52.806 "is_configured": false, 00:22:52.806 "data_offset": 0, 00:22:52.806 "data_size": 63488 00:22:52.806 }, 00:22:52.806 { 00:22:52.806 "name": "BaseBdev4", 00:22:52.806 "uuid": "fbcb77c2-3ae3-4aa1-b3f3-b4e50aa437f1", 00:22:52.806 "is_configured": true, 00:22:52.806 "data_offset": 2048, 00:22:52.806 "data_size": 63488 00:22:52.806 } 00:22:52.806 ] 00:22:52.806 }' 00:22:52.806 23:04:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:52.806 23:04:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.069 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.069 23:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.069 23:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.069 23:04:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:53.069 23:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.069 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:22:53.069 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:53.069 23:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.069 23:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.069 [2024-12-09 23:04:28.296275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:53.069 23:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.069 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:53.069 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:53.069 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:53.069 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:53.069 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:53.069 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:53.069 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:53.069 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:53.069 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:22:53.069 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:53.069 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.069 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:53.069 23:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.069 23:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.069 23:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.069 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:53.069 "name": "Existed_Raid", 00:22:53.069 "uuid": "6a6c3690-0a6c-4a99-8ac4-780939f29d1a", 00:22:53.069 "strip_size_kb": 0, 00:22:53.069 "state": "configuring", 00:22:53.069 "raid_level": "raid1", 00:22:53.069 "superblock": true, 00:22:53.069 "num_base_bdevs": 4, 00:22:53.069 "num_base_bdevs_discovered": 3, 00:22:53.069 "num_base_bdevs_operational": 4, 00:22:53.069 "base_bdevs_list": [ 00:22:53.069 { 00:22:53.069 "name": "BaseBdev1", 00:22:53.069 "uuid": "f328074b-6aed-45ed-a517-7b6818464ed0", 00:22:53.069 "is_configured": true, 00:22:53.069 "data_offset": 2048, 00:22:53.069 "data_size": 63488 00:22:53.069 }, 00:22:53.069 { 00:22:53.069 "name": null, 00:22:53.069 "uuid": "58037ff5-4af4-41dd-a0a3-59ea0fb3df9c", 00:22:53.069 "is_configured": false, 00:22:53.069 "data_offset": 0, 00:22:53.069 "data_size": 63488 00:22:53.069 }, 00:22:53.069 { 00:22:53.069 "name": "BaseBdev3", 00:22:53.069 "uuid": "ca3d61b1-2f58-411d-bc2b-f7cf7406cb1b", 00:22:53.069 "is_configured": true, 00:22:53.069 "data_offset": 2048, 00:22:53.069 "data_size": 63488 00:22:53.069 }, 00:22:53.069 { 00:22:53.069 "name": "BaseBdev4", 00:22:53.069 "uuid": 
"fbcb77c2-3ae3-4aa1-b3f3-b4e50aa437f1", 00:22:53.069 "is_configured": true, 00:22:53.069 "data_offset": 2048, 00:22:53.069 "data_size": 63488 00:22:53.069 } 00:22:53.069 ] 00:22:53.069 }' 00:22:53.069 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:53.069 23:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.329 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.329 23:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.329 23:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.329 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:53.329 23:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.329 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:22:53.329 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:53.329 23:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.329 23:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.329 [2024-12-09 23:04:28.672374] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:53.595 23:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.595 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:53.595 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:53.595 23:04:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:53.595 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:53.595 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:53.595 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:53.595 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:53.595 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:53.595 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:53.595 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:53.595 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.595 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:53.595 23:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.595 23:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.595 23:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.595 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:53.595 "name": "Existed_Raid", 00:22:53.595 "uuid": "6a6c3690-0a6c-4a99-8ac4-780939f29d1a", 00:22:53.595 "strip_size_kb": 0, 00:22:53.595 "state": "configuring", 00:22:53.595 "raid_level": "raid1", 00:22:53.595 "superblock": true, 00:22:53.595 "num_base_bdevs": 4, 00:22:53.595 "num_base_bdevs_discovered": 2, 00:22:53.595 "num_base_bdevs_operational": 4, 00:22:53.595 "base_bdevs_list": [ 00:22:53.595 { 00:22:53.595 "name": null, 00:22:53.595 
"uuid": "f328074b-6aed-45ed-a517-7b6818464ed0", 00:22:53.595 "is_configured": false, 00:22:53.595 "data_offset": 0, 00:22:53.595 "data_size": 63488 00:22:53.595 }, 00:22:53.595 { 00:22:53.595 "name": null, 00:22:53.595 "uuid": "58037ff5-4af4-41dd-a0a3-59ea0fb3df9c", 00:22:53.595 "is_configured": false, 00:22:53.595 "data_offset": 0, 00:22:53.595 "data_size": 63488 00:22:53.595 }, 00:22:53.595 { 00:22:53.595 "name": "BaseBdev3", 00:22:53.595 "uuid": "ca3d61b1-2f58-411d-bc2b-f7cf7406cb1b", 00:22:53.595 "is_configured": true, 00:22:53.595 "data_offset": 2048, 00:22:53.595 "data_size": 63488 00:22:53.595 }, 00:22:53.595 { 00:22:53.595 "name": "BaseBdev4", 00:22:53.595 "uuid": "fbcb77c2-3ae3-4aa1-b3f3-b4e50aa437f1", 00:22:53.595 "is_configured": true, 00:22:53.595 "data_offset": 2048, 00:22:53.595 "data_size": 63488 00:22:53.595 } 00:22:53.595 ] 00:22:53.595 }' 00:22:53.595 23:04:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:53.595 23:04:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.859 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:53.859 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.859 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.859 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.859 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.859 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:22:53.859 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:53.859 23:04:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.859 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.859 [2024-12-09 23:04:29.124040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:53.859 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.859 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:53.859 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:53.859 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:53.859 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:53.859 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:53.859 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:53.859 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:53.859 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:53.859 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:53.859 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:53.859 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:53.859 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.859 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.859 23:04:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.859 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.859 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:53.859 "name": "Existed_Raid", 00:22:53.859 "uuid": "6a6c3690-0a6c-4a99-8ac4-780939f29d1a", 00:22:53.859 "strip_size_kb": 0, 00:22:53.859 "state": "configuring", 00:22:53.859 "raid_level": "raid1", 00:22:53.859 "superblock": true, 00:22:53.859 "num_base_bdevs": 4, 00:22:53.859 "num_base_bdevs_discovered": 3, 00:22:53.859 "num_base_bdevs_operational": 4, 00:22:53.859 "base_bdevs_list": [ 00:22:53.859 { 00:22:53.859 "name": null, 00:22:53.859 "uuid": "f328074b-6aed-45ed-a517-7b6818464ed0", 00:22:53.859 "is_configured": false, 00:22:53.859 "data_offset": 0, 00:22:53.859 "data_size": 63488 00:22:53.859 }, 00:22:53.859 { 00:22:53.859 "name": "BaseBdev2", 00:22:53.859 "uuid": "58037ff5-4af4-41dd-a0a3-59ea0fb3df9c", 00:22:53.859 "is_configured": true, 00:22:53.859 "data_offset": 2048, 00:22:53.859 "data_size": 63488 00:22:53.859 }, 00:22:53.859 { 00:22:53.859 "name": "BaseBdev3", 00:22:53.859 "uuid": "ca3d61b1-2f58-411d-bc2b-f7cf7406cb1b", 00:22:53.859 "is_configured": true, 00:22:53.859 "data_offset": 2048, 00:22:53.859 "data_size": 63488 00:22:53.859 }, 00:22:53.859 { 00:22:53.859 "name": "BaseBdev4", 00:22:53.859 "uuid": "fbcb77c2-3ae3-4aa1-b3f3-b4e50aa437f1", 00:22:53.859 "is_configured": true, 00:22:53.859 "data_offset": 2048, 00:22:53.859 "data_size": 63488 00:22:53.859 } 00:22:53.859 ] 00:22:53.859 }' 00:22:53.859 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:53.859 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.119 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:54.119 23:04:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.119 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.119 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:54.119 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.119 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:54.381 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:54.381 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:54.381 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.381 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.381 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.381 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f328074b-6aed-45ed-a517-7b6818464ed0 00:22:54.381 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.381 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.381 [2024-12-09 23:04:29.538533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:54.381 [2024-12-09 23:04:29.538743] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:54.381 [2024-12-09 23:04:29.538762] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:54.381 [2024-12-09 23:04:29.539006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:22:54.381 [2024-12-09 23:04:29.539160] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:54.381 [2024-12-09 23:04:29.539169] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:54.381 NewBaseBdev 00:22:54.381 [2024-12-09 23:04:29.539286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:54.381 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.381 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:22:54.381 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:22:54.381 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:54.381 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:54.381 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:54.381 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:54.381 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:54.381 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.381 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.381 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.381 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:54.381 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.381 23:04:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.381 [ 00:22:54.381 { 00:22:54.381 "name": "NewBaseBdev", 00:22:54.382 "aliases": [ 00:22:54.382 "f328074b-6aed-45ed-a517-7b6818464ed0" 00:22:54.382 ], 00:22:54.382 "product_name": "Malloc disk", 00:22:54.382 "block_size": 512, 00:22:54.382 "num_blocks": 65536, 00:22:54.382 "uuid": "f328074b-6aed-45ed-a517-7b6818464ed0", 00:22:54.382 "assigned_rate_limits": { 00:22:54.382 "rw_ios_per_sec": 0, 00:22:54.382 "rw_mbytes_per_sec": 0, 00:22:54.382 "r_mbytes_per_sec": 0, 00:22:54.382 "w_mbytes_per_sec": 0 00:22:54.382 }, 00:22:54.382 "claimed": true, 00:22:54.382 "claim_type": "exclusive_write", 00:22:54.382 "zoned": false, 00:22:54.382 "supported_io_types": { 00:22:54.382 "read": true, 00:22:54.382 "write": true, 00:22:54.382 "unmap": true, 00:22:54.382 "flush": true, 00:22:54.382 "reset": true, 00:22:54.382 "nvme_admin": false, 00:22:54.382 "nvme_io": false, 00:22:54.382 "nvme_io_md": false, 00:22:54.382 "write_zeroes": true, 00:22:54.382 "zcopy": true, 00:22:54.382 "get_zone_info": false, 00:22:54.382 "zone_management": false, 00:22:54.382 "zone_append": false, 00:22:54.382 "compare": false, 00:22:54.382 "compare_and_write": false, 00:22:54.382 "abort": true, 00:22:54.382 "seek_hole": false, 00:22:54.382 "seek_data": false, 00:22:54.382 "copy": true, 00:22:54.382 "nvme_iov_md": false 00:22:54.382 }, 00:22:54.382 "memory_domains": [ 00:22:54.382 { 00:22:54.382 "dma_device_id": "system", 00:22:54.382 "dma_device_type": 1 00:22:54.382 }, 00:22:54.382 { 00:22:54.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.382 "dma_device_type": 2 00:22:54.382 } 00:22:54.382 ], 00:22:54.382 "driver_specific": {} 00:22:54.382 } 00:22:54.382 ] 00:22:54.382 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.382 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:54.382 23:04:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:22:54.382 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:54.382 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:54.382 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:54.382 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:54.382 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:54.382 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:54.382 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:54.382 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:54.382 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:54.382 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:54.382 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:54.382 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.382 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.382 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.382 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:54.382 "name": "Existed_Raid", 00:22:54.382 "uuid": "6a6c3690-0a6c-4a99-8ac4-780939f29d1a", 00:22:54.382 "strip_size_kb": 0, 00:22:54.382 
"state": "online", 00:22:54.382 "raid_level": "raid1", 00:22:54.382 "superblock": true, 00:22:54.382 "num_base_bdevs": 4, 00:22:54.382 "num_base_bdevs_discovered": 4, 00:22:54.382 "num_base_bdevs_operational": 4, 00:22:54.382 "base_bdevs_list": [ 00:22:54.382 { 00:22:54.382 "name": "NewBaseBdev", 00:22:54.382 "uuid": "f328074b-6aed-45ed-a517-7b6818464ed0", 00:22:54.382 "is_configured": true, 00:22:54.382 "data_offset": 2048, 00:22:54.382 "data_size": 63488 00:22:54.382 }, 00:22:54.382 { 00:22:54.382 "name": "BaseBdev2", 00:22:54.382 "uuid": "58037ff5-4af4-41dd-a0a3-59ea0fb3df9c", 00:22:54.382 "is_configured": true, 00:22:54.382 "data_offset": 2048, 00:22:54.382 "data_size": 63488 00:22:54.382 }, 00:22:54.382 { 00:22:54.382 "name": "BaseBdev3", 00:22:54.382 "uuid": "ca3d61b1-2f58-411d-bc2b-f7cf7406cb1b", 00:22:54.382 "is_configured": true, 00:22:54.382 "data_offset": 2048, 00:22:54.382 "data_size": 63488 00:22:54.382 }, 00:22:54.382 { 00:22:54.382 "name": "BaseBdev4", 00:22:54.382 "uuid": "fbcb77c2-3ae3-4aa1-b3f3-b4e50aa437f1", 00:22:54.382 "is_configured": true, 00:22:54.382 "data_offset": 2048, 00:22:54.382 "data_size": 63488 00:22:54.382 } 00:22:54.382 ] 00:22:54.382 }' 00:22:54.382 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:54.382 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.643 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:22:54.643 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:54.643 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:54.643 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:54.643 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:22:54.643 
23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:54.643 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:54.643 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.643 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:54.643 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.643 [2024-12-09 23:04:29.891011] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:54.643 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.643 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:54.643 "name": "Existed_Raid", 00:22:54.643 "aliases": [ 00:22:54.643 "6a6c3690-0a6c-4a99-8ac4-780939f29d1a" 00:22:54.643 ], 00:22:54.643 "product_name": "Raid Volume", 00:22:54.643 "block_size": 512, 00:22:54.643 "num_blocks": 63488, 00:22:54.643 "uuid": "6a6c3690-0a6c-4a99-8ac4-780939f29d1a", 00:22:54.643 "assigned_rate_limits": { 00:22:54.643 "rw_ios_per_sec": 0, 00:22:54.643 "rw_mbytes_per_sec": 0, 00:22:54.643 "r_mbytes_per_sec": 0, 00:22:54.643 "w_mbytes_per_sec": 0 00:22:54.643 }, 00:22:54.643 "claimed": false, 00:22:54.643 "zoned": false, 00:22:54.643 "supported_io_types": { 00:22:54.643 "read": true, 00:22:54.643 "write": true, 00:22:54.643 "unmap": false, 00:22:54.643 "flush": false, 00:22:54.643 "reset": true, 00:22:54.643 "nvme_admin": false, 00:22:54.643 "nvme_io": false, 00:22:54.643 "nvme_io_md": false, 00:22:54.643 "write_zeroes": true, 00:22:54.643 "zcopy": false, 00:22:54.643 "get_zone_info": false, 00:22:54.643 "zone_management": false, 00:22:54.643 "zone_append": false, 00:22:54.643 "compare": false, 00:22:54.643 "compare_and_write": false, 00:22:54.643 
"abort": false, 00:22:54.643 "seek_hole": false, 00:22:54.643 "seek_data": false, 00:22:54.643 "copy": false, 00:22:54.643 "nvme_iov_md": false 00:22:54.643 }, 00:22:54.643 "memory_domains": [ 00:22:54.643 { 00:22:54.643 "dma_device_id": "system", 00:22:54.643 "dma_device_type": 1 00:22:54.643 }, 00:22:54.643 { 00:22:54.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.643 "dma_device_type": 2 00:22:54.643 }, 00:22:54.643 { 00:22:54.643 "dma_device_id": "system", 00:22:54.643 "dma_device_type": 1 00:22:54.643 }, 00:22:54.643 { 00:22:54.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.643 "dma_device_type": 2 00:22:54.643 }, 00:22:54.643 { 00:22:54.643 "dma_device_id": "system", 00:22:54.643 "dma_device_type": 1 00:22:54.643 }, 00:22:54.643 { 00:22:54.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.643 "dma_device_type": 2 00:22:54.643 }, 00:22:54.643 { 00:22:54.643 "dma_device_id": "system", 00:22:54.643 "dma_device_type": 1 00:22:54.643 }, 00:22:54.643 { 00:22:54.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.643 "dma_device_type": 2 00:22:54.643 } 00:22:54.643 ], 00:22:54.643 "driver_specific": { 00:22:54.643 "raid": { 00:22:54.643 "uuid": "6a6c3690-0a6c-4a99-8ac4-780939f29d1a", 00:22:54.643 "strip_size_kb": 0, 00:22:54.643 "state": "online", 00:22:54.643 "raid_level": "raid1", 00:22:54.643 "superblock": true, 00:22:54.643 "num_base_bdevs": 4, 00:22:54.643 "num_base_bdevs_discovered": 4, 00:22:54.643 "num_base_bdevs_operational": 4, 00:22:54.643 "base_bdevs_list": [ 00:22:54.643 { 00:22:54.643 "name": "NewBaseBdev", 00:22:54.643 "uuid": "f328074b-6aed-45ed-a517-7b6818464ed0", 00:22:54.643 "is_configured": true, 00:22:54.643 "data_offset": 2048, 00:22:54.643 "data_size": 63488 00:22:54.643 }, 00:22:54.643 { 00:22:54.643 "name": "BaseBdev2", 00:22:54.643 "uuid": "58037ff5-4af4-41dd-a0a3-59ea0fb3df9c", 00:22:54.643 "is_configured": true, 00:22:54.643 "data_offset": 2048, 00:22:54.643 "data_size": 63488 00:22:54.643 }, 00:22:54.643 { 
00:22:54.643 "name": "BaseBdev3", 00:22:54.643 "uuid": "ca3d61b1-2f58-411d-bc2b-f7cf7406cb1b", 00:22:54.643 "is_configured": true, 00:22:54.643 "data_offset": 2048, 00:22:54.643 "data_size": 63488 00:22:54.643 }, 00:22:54.643 { 00:22:54.643 "name": "BaseBdev4", 00:22:54.643 "uuid": "fbcb77c2-3ae3-4aa1-b3f3-b4e50aa437f1", 00:22:54.643 "is_configured": true, 00:22:54.643 "data_offset": 2048, 00:22:54.643 "data_size": 63488 00:22:54.643 } 00:22:54.643 ] 00:22:54.643 } 00:22:54.643 } 00:22:54.643 }' 00:22:54.643 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:54.643 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:22:54.643 BaseBdev2 00:22:54.643 BaseBdev3 00:22:54.643 BaseBdev4' 00:22:54.643 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:54.643 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:54.643 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:54.643 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:22:54.643 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.643 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.643 23:04:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:54.643 23:04:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
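The trace above builds `base_bdev_names` with a jq filter over the RAID bdev's `driver_specific.raid.base_bdevs_list`, and builds the `cmp_raid_bdev`/`cmp_base_bdev` strings by joining `[.block_size, .md_size, .md_interleave, .dif_type]`. The same selection can be sketched in Python against a trimmed-down bdev record (the dict below is illustrative, not a full `bdev_get_bdevs` payload):

```python
import json

# Trimmed-down Raid Volume record, mirroring only the fields the jq filters touch.
raid_info = json.loads("""
{
  "name": "Existed_Raid",
  "block_size": 512,
  "md_size": null,
  "md_interleave": null,
  "dif_type": null,
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "NewBaseBdev", "is_configured": true},
        {"name": "BaseBdev2", "is_configured": true},
        {"name": "BaseBdev3", "is_configured": false}
      ]
    }
  }
}
""")

# jq: .driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name
base_bdev_names = [b["name"]
                   for b in raid_info["driver_specific"]["raid"]["base_bdevs_list"]
                   if b["is_configured"]]

# jq: [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")
# jq's join() renders null as an empty string, which is why the script's
# comparison target is '512' followed by three spaces ([[ 512 == \5\1\2\ \ \  ]]).
cmp_raid_bdev = " ".join("" if raid_info[k] is None else str(raid_info[k])
                         for k in ("block_size", "md_size", "md_interleave", "dif_type"))
```

This also explains the odd-looking trailing whitespace in the `[[ ... ]]` tests in the trace: the joined string carries empty slots for the three null metadata fields.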
00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.904 [2024-12-09 23:04:30.122706] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:54.904 [2024-12-09 23:04:30.122735] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:54.904 [2024-12-09 23:04:30.122804] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:54.904 [2024-12-09 23:04:30.123088] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:54.904 [2024-12-09 23:04:30.123119] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71954 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71954 ']' 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71954 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71954 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:54.904 killing process with pid 71954 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71954' 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71954 00:22:54.904 [2024-12-09 23:04:30.152332] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:54.904 23:04:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71954 00:22:55.165 [2024-12-09 23:04:30.400298] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:56.107 23:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:22:56.107 00:22:56.107 real 0m8.674s 00:22:56.107 user 0m13.681s 00:22:56.107 sys 0m1.554s 00:22:56.107 23:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:22:56.107 23:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:56.107 ************************************ 00:22:56.107 END TEST raid_state_function_test_sb 00:22:56.107 ************************************ 00:22:56.107 23:04:31 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:22:56.107 23:04:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:56.107 23:04:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:56.107 23:04:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:56.107 ************************************ 00:22:56.107 START TEST raid_superblock_test 00:22:56.107 ************************************ 00:22:56.107 23:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:22:56.107 23:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:22:56.107 23:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:22:56.107 23:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:56.107 23:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:56.107 23:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:56.107 23:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:22:56.107 23:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:56.107 23:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:56.107 23:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:22:56.107 23:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:22:56.107 23:04:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:22:56.107 23:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:56.107 23:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:56.107 23:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:22:56.107 23:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:22:56.107 23:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72596 00:22:56.107 23:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72596 00:22:56.107 23:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72596 ']' 00:22:56.107 23:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:56.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:56.107 23:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:56.107 23:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:56.107 23:04:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:22:56.107 23:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:56.107 23:04:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.107 [2024-12-09 23:04:31.231565] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
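The harness keeps three parallel bash arrays (`base_bdevs_malloc`, `base_bdevs_pt`, `base_bdevs_pt_uuid`) and fills them in a `(( i <= num_base_bdevs ))` loop, as the traced iterations below show. For `num_base_bdevs=4` the naming scheme can be sketched as (a reconstruction from the trace, not taken from `bdev_raid.sh` itself):

```python
# Reproduce the per-iteration names visible in the trace: malloc<i>, pt<i>,
# and a passthru UUID whose last field is the zero-padded loop index.
num_base_bdevs = 4

base_bdevs_malloc = [f"malloc{i}" for i in range(1, num_base_bdevs + 1)]
base_bdevs_pt = [f"pt{i}" for i in range(1, num_base_bdevs + 1)]
base_bdevs_pt_uuid = [f"00000000-0000-0000-0000-{i:012d}"
                      for i in range(1, num_base_bdevs + 1)]
```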
00:22:56.107 [2024-12-09 23:04:31.231698] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72596 ] 00:22:56.107 [2024-12-09 23:04:31.392057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.368 [2024-12-09 23:04:31.494567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.368 [2024-12-09 23:04:31.631494] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:56.368 [2024-12-09 23:04:31.631552] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:22:56.938 
23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.938 malloc1 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.938 [2024-12-09 23:04:32.182656] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:56.938 [2024-12-09 23:04:32.182724] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:56.938 [2024-12-09 23:04:32.182747] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:56.938 [2024-12-09 23:04:32.182758] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:56.938 [2024-12-09 23:04:32.184960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:56.938 [2024-12-09 23:04:32.185003] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:56.938 pt1 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.938 malloc2 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.938 [2024-12-09 23:04:32.224470] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:56.938 [2024-12-09 23:04:32.224551] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:56.938 [2024-12-09 23:04:32.224584] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:56.938 [2024-12-09 23:04:32.224596] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:56.938 [2024-12-09 23:04:32.227782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:56.938 [2024-12-09 23:04:32.227836] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:56.938 
pt2 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.938 malloc3 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.938 [2024-12-09 23:04:32.282903] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:56.938 [2024-12-09 23:04:32.282975] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:56.938 [2024-12-09 23:04:32.282998] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:56.938 [2024-12-09 23:04:32.283007] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:56.938 [2024-12-09 23:04:32.285226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:56.938 [2024-12-09 23:04:32.285262] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:56.938 pt3 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:56.938 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:22:56.939 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.939 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.198 malloc4 00:22:57.198 23:04:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.198 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:57.198 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.198 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.198 [2024-12-09 23:04:32.319617] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:57.198 [2024-12-09 23:04:32.319679] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:57.198 [2024-12-09 23:04:32.319697] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:22:57.198 [2024-12-09 23:04:32.319705] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:57.198 [2024-12-09 23:04:32.321888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:57.198 [2024-12-09 23:04:32.321926] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:57.198 pt4 00:22:57.198 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.198 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:57.198 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:57.198 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:22:57.198 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.198 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.198 [2024-12-09 23:04:32.327655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:57.198 [2024-12-09 23:04:32.329556] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:57.198 [2024-12-09 23:04:32.329628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:57.198 [2024-12-09 23:04:32.329691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:57.198 [2024-12-09 23:04:32.329887] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:57.198 [2024-12-09 23:04:32.329908] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:57.198 [2024-12-09 23:04:32.330201] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:57.198 [2024-12-09 23:04:32.330366] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:57.198 [2024-12-09 23:04:32.330385] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:57.198 [2024-12-09 23:04:32.330536] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:57.198 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.198 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:57.198 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:57.198 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:57.198 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:57.198 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:57.198 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:57.198 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:57.198 
23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:57.198 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:57.198 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:57.198 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:57.198 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.198 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.198 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.198 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.198 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:57.198 "name": "raid_bdev1", 00:22:57.198 "uuid": "38914642-6a49-4bb1-9c89-227cdfdbb9cf", 00:22:57.198 "strip_size_kb": 0, 00:22:57.198 "state": "online", 00:22:57.198 "raid_level": "raid1", 00:22:57.198 "superblock": true, 00:22:57.198 "num_base_bdevs": 4, 00:22:57.198 "num_base_bdevs_discovered": 4, 00:22:57.198 "num_base_bdevs_operational": 4, 00:22:57.198 "base_bdevs_list": [ 00:22:57.198 { 00:22:57.198 "name": "pt1", 00:22:57.198 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:57.198 "is_configured": true, 00:22:57.198 "data_offset": 2048, 00:22:57.198 "data_size": 63488 00:22:57.198 }, 00:22:57.198 { 00:22:57.198 "name": "pt2", 00:22:57.198 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:57.198 "is_configured": true, 00:22:57.198 "data_offset": 2048, 00:22:57.198 "data_size": 63488 00:22:57.198 }, 00:22:57.198 { 00:22:57.198 "name": "pt3", 00:22:57.198 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:57.198 "is_configured": true, 00:22:57.198 "data_offset": 2048, 00:22:57.198 "data_size": 63488 
00:22:57.198 }, 00:22:57.198 { 00:22:57.198 "name": "pt4", 00:22:57.198 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:57.198 "is_configured": true, 00:22:57.199 "data_offset": 2048, 00:22:57.199 "data_size": 63488 00:22:57.199 } 00:22:57.199 ] 00:22:57.199 }' 00:22:57.199 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:57.199 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.459 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:22:57.459 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:57.459 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:57.459 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:57.459 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:57.459 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:57.459 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:57.459 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.459 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.459 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:57.459 [2024-12-09 23:04:32.660043] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:57.459 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.459 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:57.459 "name": "raid_bdev1", 00:22:57.459 "aliases": [ 00:22:57.459 "38914642-6a49-4bb1-9c89-227cdfdbb9cf" 00:22:57.459 ], 
00:22:57.459 "product_name": "Raid Volume", 00:22:57.459 "block_size": 512, 00:22:57.459 "num_blocks": 63488, 00:22:57.459 "uuid": "38914642-6a49-4bb1-9c89-227cdfdbb9cf", 00:22:57.459 "assigned_rate_limits": { 00:22:57.459 "rw_ios_per_sec": 0, 00:22:57.459 "rw_mbytes_per_sec": 0, 00:22:57.459 "r_mbytes_per_sec": 0, 00:22:57.459 "w_mbytes_per_sec": 0 00:22:57.459 }, 00:22:57.459 "claimed": false, 00:22:57.459 "zoned": false, 00:22:57.459 "supported_io_types": { 00:22:57.459 "read": true, 00:22:57.459 "write": true, 00:22:57.459 "unmap": false, 00:22:57.459 "flush": false, 00:22:57.459 "reset": true, 00:22:57.459 "nvme_admin": false, 00:22:57.459 "nvme_io": false, 00:22:57.459 "nvme_io_md": false, 00:22:57.459 "write_zeroes": true, 00:22:57.459 "zcopy": false, 00:22:57.459 "get_zone_info": false, 00:22:57.459 "zone_management": false, 00:22:57.459 "zone_append": false, 00:22:57.459 "compare": false, 00:22:57.459 "compare_and_write": false, 00:22:57.459 "abort": false, 00:22:57.459 "seek_hole": false, 00:22:57.459 "seek_data": false, 00:22:57.459 "copy": false, 00:22:57.459 "nvme_iov_md": false 00:22:57.459 }, 00:22:57.459 "memory_domains": [ 00:22:57.459 { 00:22:57.459 "dma_device_id": "system", 00:22:57.459 "dma_device_type": 1 00:22:57.459 }, 00:22:57.459 { 00:22:57.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.459 "dma_device_type": 2 00:22:57.459 }, 00:22:57.459 { 00:22:57.460 "dma_device_id": "system", 00:22:57.460 "dma_device_type": 1 00:22:57.460 }, 00:22:57.460 { 00:22:57.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.460 "dma_device_type": 2 00:22:57.460 }, 00:22:57.460 { 00:22:57.460 "dma_device_id": "system", 00:22:57.460 "dma_device_type": 1 00:22:57.460 }, 00:22:57.460 { 00:22:57.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.460 "dma_device_type": 2 00:22:57.460 }, 00:22:57.460 { 00:22:57.460 "dma_device_id": "system", 00:22:57.460 "dma_device_type": 1 00:22:57.460 }, 00:22:57.460 { 00:22:57.460 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:22:57.460 "dma_device_type": 2 00:22:57.460 } 00:22:57.460 ], 00:22:57.460 "driver_specific": { 00:22:57.460 "raid": { 00:22:57.460 "uuid": "38914642-6a49-4bb1-9c89-227cdfdbb9cf", 00:22:57.460 "strip_size_kb": 0, 00:22:57.460 "state": "online", 00:22:57.460 "raid_level": "raid1", 00:22:57.460 "superblock": true, 00:22:57.460 "num_base_bdevs": 4, 00:22:57.460 "num_base_bdevs_discovered": 4, 00:22:57.460 "num_base_bdevs_operational": 4, 00:22:57.460 "base_bdevs_list": [ 00:22:57.460 { 00:22:57.460 "name": "pt1", 00:22:57.460 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:57.460 "is_configured": true, 00:22:57.460 "data_offset": 2048, 00:22:57.460 "data_size": 63488 00:22:57.460 }, 00:22:57.460 { 00:22:57.460 "name": "pt2", 00:22:57.460 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:57.460 "is_configured": true, 00:22:57.460 "data_offset": 2048, 00:22:57.460 "data_size": 63488 00:22:57.460 }, 00:22:57.460 { 00:22:57.460 "name": "pt3", 00:22:57.460 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:57.460 "is_configured": true, 00:22:57.460 "data_offset": 2048, 00:22:57.460 "data_size": 63488 00:22:57.460 }, 00:22:57.460 { 00:22:57.460 "name": "pt4", 00:22:57.460 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:57.460 "is_configured": true, 00:22:57.460 "data_offset": 2048, 00:22:57.460 "data_size": 63488 00:22:57.460 } 00:22:57.460 ] 00:22:57.460 } 00:22:57.460 } 00:22:57.460 }' 00:22:57.460 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:57.460 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:57.460 pt2 00:22:57.460 pt3 00:22:57.460 pt4' 00:22:57.460 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:57.460 23:04:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:57.460 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:57.460 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:57.460 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.460 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.460 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:57.460 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.460 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:57.460 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:57.460 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:57.460 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:57.460 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.460 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.460 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:57.460 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:57.722 23:04:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.722 [2024-12-09 23:04:32.884081] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=38914642-6a49-4bb1-9c89-227cdfdbb9cf 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 38914642-6a49-4bb1-9c89-227cdfdbb9cf ']' 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.722 [2024-12-09 23:04:32.903754] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:57.722 [2024-12-09 23:04:32.903786] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:57.722 [2024-12-09 23:04:32.903858] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:57.722 [2024-12-09 23:04:32.903943] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:57.722 [2024-12-09 23:04:32.903958] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.722 23:04:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.722 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:22:57.722 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:22:57.722 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:22:57.722 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:22:57.722 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:57.722 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:57.722 23:04:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:57.722 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:57.722 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:22:57.722 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.722 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.722 [2024-12-09 23:04:33.015804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:57.722 [2024-12-09 23:04:33.017733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:57.722 [2024-12-09 23:04:33.017790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:57.722 [2024-12-09 23:04:33.017827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:22:57.722 [2024-12-09 23:04:33.017875] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:57.722 [2024-12-09 23:04:33.017927] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:57.723 [2024-12-09 23:04:33.017947] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:22:57.723 [2024-12-09 23:04:33.017965] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:22:57.723 [2024-12-09 23:04:33.017978] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:57.723 [2024-12-09 23:04:33.017988] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:22:57.723 request: 00:22:57.723 { 00:22:57.723 "name": "raid_bdev1", 00:22:57.723 "raid_level": "raid1", 00:22:57.723 "base_bdevs": [ 00:22:57.723 "malloc1", 00:22:57.723 "malloc2", 00:22:57.723 "malloc3", 00:22:57.723 "malloc4" 00:22:57.723 ], 00:22:57.723 "superblock": false, 00:22:57.723 "method": "bdev_raid_create", 00:22:57.723 "req_id": 1 00:22:57.723 } 00:22:57.723 Got JSON-RPC error response 00:22:57.723 response: 00:22:57.723 { 00:22:57.723 "code": -17, 00:22:57.723 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:57.723 } 00:22:57.723 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:57.723 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:22:57.723 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:57.723 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:57.723 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:57.723 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:57.723 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.723 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.723 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:22:57.723 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.723 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:22:57.723 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:22:57.723 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:57.723 
23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.723 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.723 [2024-12-09 23:04:33.059790] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:57.723 [2024-12-09 23:04:33.059855] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:57.723 [2024-12-09 23:04:33.059870] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:57.723 [2024-12-09 23:04:33.059881] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:57.723 [2024-12-09 23:04:33.062068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:57.723 [2024-12-09 23:04:33.062127] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:57.723 [2024-12-09 23:04:33.062209] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:57.723 [2024-12-09 23:04:33.062262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:57.723 pt1 00:22:57.723 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.723 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:22:57.723 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:57.723 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:57.723 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:57.723 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:57.723 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:57.723 23:04:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:57.723 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:57.723 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:57.723 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:57.723 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:57.723 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.723 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.723 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.723 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.984 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:57.984 "name": "raid_bdev1", 00:22:57.984 "uuid": "38914642-6a49-4bb1-9c89-227cdfdbb9cf", 00:22:57.984 "strip_size_kb": 0, 00:22:57.984 "state": "configuring", 00:22:57.984 "raid_level": "raid1", 00:22:57.984 "superblock": true, 00:22:57.984 "num_base_bdevs": 4, 00:22:57.984 "num_base_bdevs_discovered": 1, 00:22:57.984 "num_base_bdevs_operational": 4, 00:22:57.984 "base_bdevs_list": [ 00:22:57.984 { 00:22:57.984 "name": "pt1", 00:22:57.984 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:57.984 "is_configured": true, 00:22:57.984 "data_offset": 2048, 00:22:57.984 "data_size": 63488 00:22:57.984 }, 00:22:57.984 { 00:22:57.984 "name": null, 00:22:57.984 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:57.984 "is_configured": false, 00:22:57.984 "data_offset": 2048, 00:22:57.984 "data_size": 63488 00:22:57.984 }, 00:22:57.984 { 00:22:57.984 "name": null, 00:22:57.984 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:57.984 
"is_configured": false, 00:22:57.984 "data_offset": 2048, 00:22:57.984 "data_size": 63488 00:22:57.984 }, 00:22:57.984 { 00:22:57.984 "name": null, 00:22:57.984 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:57.984 "is_configured": false, 00:22:57.984 "data_offset": 2048, 00:22:57.984 "data_size": 63488 00:22:57.984 } 00:22:57.984 ] 00:22:57.984 }' 00:22:57.984 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:57.984 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.246 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:22:58.246 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:58.246 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.246 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.246 [2024-12-09 23:04:33.407890] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:58.246 [2024-12-09 23:04:33.407958] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:58.246 [2024-12-09 23:04:33.407974] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:22:58.246 [2024-12-09 23:04:33.407984] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:58.246 [2024-12-09 23:04:33.408391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:58.246 [2024-12-09 23:04:33.408415] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:58.246 [2024-12-09 23:04:33.408484] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:58.246 [2024-12-09 23:04:33.408505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:22:58.246 pt2 00:22:58.246 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.246 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:22:58.246 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.246 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.246 [2024-12-09 23:04:33.415909] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:58.246 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.246 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:22:58.246 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:58.246 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:58.246 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:58.246 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:58.246 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:58.246 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:58.246 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:58.246 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:58.246 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:58.246 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:58.246 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:22:58.246 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.246 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.246 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.246 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:58.246 "name": "raid_bdev1", 00:22:58.246 "uuid": "38914642-6a49-4bb1-9c89-227cdfdbb9cf", 00:22:58.246 "strip_size_kb": 0, 00:22:58.246 "state": "configuring", 00:22:58.246 "raid_level": "raid1", 00:22:58.246 "superblock": true, 00:22:58.246 "num_base_bdevs": 4, 00:22:58.246 "num_base_bdevs_discovered": 1, 00:22:58.246 "num_base_bdevs_operational": 4, 00:22:58.246 "base_bdevs_list": [ 00:22:58.246 { 00:22:58.246 "name": "pt1", 00:22:58.246 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:58.246 "is_configured": true, 00:22:58.246 "data_offset": 2048, 00:22:58.246 "data_size": 63488 00:22:58.246 }, 00:22:58.246 { 00:22:58.246 "name": null, 00:22:58.246 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:58.246 "is_configured": false, 00:22:58.246 "data_offset": 0, 00:22:58.246 "data_size": 63488 00:22:58.246 }, 00:22:58.246 { 00:22:58.246 "name": null, 00:22:58.246 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:58.246 "is_configured": false, 00:22:58.246 "data_offset": 2048, 00:22:58.246 "data_size": 63488 00:22:58.246 }, 00:22:58.246 { 00:22:58.246 "name": null, 00:22:58.246 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:58.246 "is_configured": false, 00:22:58.246 "data_offset": 2048, 00:22:58.246 "data_size": 63488 00:22:58.246 } 00:22:58.246 ] 00:22:58.246 }' 00:22:58.246 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:58.246 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.508 [2024-12-09 23:04:33.755982] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:58.508 [2024-12-09 23:04:33.756055] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:58.508 [2024-12-09 23:04:33.756074] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:22:58.508 [2024-12-09 23:04:33.756083] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:58.508 [2024-12-09 23:04:33.756577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:58.508 [2024-12-09 23:04:33.756615] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:58.508 [2024-12-09 23:04:33.756707] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:58.508 [2024-12-09 23:04:33.756745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:58.508 pt2 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:58.508 23:04:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.508 [2024-12-09 23:04:33.763971] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:58.508 [2024-12-09 23:04:33.764158] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:58.508 [2024-12-09 23:04:33.764257] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:22:58.508 [2024-12-09 23:04:33.764276] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:58.508 [2024-12-09 23:04:33.764709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:58.508 [2024-12-09 23:04:33.764733] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:58.508 [2024-12-09 23:04:33.764805] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:58.508 [2024-12-09 23:04:33.764825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:58.508 pt3 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.508 [2024-12-09 23:04:33.771950] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:58.508 [2024-12-09 
23:04:33.772112] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:58.508 [2024-12-09 23:04:33.772204] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:22:58.508 [2024-12-09 23:04:33.772286] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:58.508 [2024-12-09 23:04:33.772799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:58.508 [2024-12-09 23:04:33.772912] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:58.508 [2024-12-09 23:04:33.773136] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:22:58.508 [2024-12-09 23:04:33.773235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:58.508 [2024-12-09 23:04:33.773484] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:58.508 [2024-12-09 23:04:33.773560] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:58.508 [2024-12-09 23:04:33.773904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:58.508 [2024-12-09 23:04:33.774157] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:58.508 [2024-12-09 23:04:33.774243] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:22:58.508 [2024-12-09 23:04:33.774500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:58.508 pt4 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:58.508 "name": "raid_bdev1", 00:22:58.508 "uuid": "38914642-6a49-4bb1-9c89-227cdfdbb9cf", 00:22:58.508 "strip_size_kb": 0, 00:22:58.508 "state": "online", 00:22:58.508 "raid_level": "raid1", 00:22:58.508 "superblock": true, 00:22:58.508 "num_base_bdevs": 4, 00:22:58.508 
"num_base_bdevs_discovered": 4, 00:22:58.508 "num_base_bdevs_operational": 4, 00:22:58.508 "base_bdevs_list": [ 00:22:58.508 { 00:22:58.508 "name": "pt1", 00:22:58.508 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:58.508 "is_configured": true, 00:22:58.508 "data_offset": 2048, 00:22:58.508 "data_size": 63488 00:22:58.508 }, 00:22:58.508 { 00:22:58.508 "name": "pt2", 00:22:58.508 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:58.508 "is_configured": true, 00:22:58.508 "data_offset": 2048, 00:22:58.508 "data_size": 63488 00:22:58.508 }, 00:22:58.508 { 00:22:58.508 "name": "pt3", 00:22:58.508 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:58.508 "is_configured": true, 00:22:58.508 "data_offset": 2048, 00:22:58.508 "data_size": 63488 00:22:58.508 }, 00:22:58.508 { 00:22:58.508 "name": "pt4", 00:22:58.508 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:58.508 "is_configured": true, 00:22:58.508 "data_offset": 2048, 00:22:58.508 "data_size": 63488 00:22:58.508 } 00:22:58.508 ] 00:22:58.508 }' 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:58.508 23:04:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.768 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:22:58.768 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:58.768 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:58.768 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:58.768 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:58.768 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:58.768 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:22:58.768 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:58.768 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.768 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.768 [2024-12-09 23:04:34.116433] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:59.028 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.028 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:59.028 "name": "raid_bdev1", 00:22:59.028 "aliases": [ 00:22:59.028 "38914642-6a49-4bb1-9c89-227cdfdbb9cf" 00:22:59.028 ], 00:22:59.028 "product_name": "Raid Volume", 00:22:59.028 "block_size": 512, 00:22:59.028 "num_blocks": 63488, 00:22:59.028 "uuid": "38914642-6a49-4bb1-9c89-227cdfdbb9cf", 00:22:59.028 "assigned_rate_limits": { 00:22:59.028 "rw_ios_per_sec": 0, 00:22:59.028 "rw_mbytes_per_sec": 0, 00:22:59.028 "r_mbytes_per_sec": 0, 00:22:59.028 "w_mbytes_per_sec": 0 00:22:59.028 }, 00:22:59.028 "claimed": false, 00:22:59.028 "zoned": false, 00:22:59.028 "supported_io_types": { 00:22:59.028 "read": true, 00:22:59.028 "write": true, 00:22:59.028 "unmap": false, 00:22:59.028 "flush": false, 00:22:59.028 "reset": true, 00:22:59.028 "nvme_admin": false, 00:22:59.028 "nvme_io": false, 00:22:59.028 "nvme_io_md": false, 00:22:59.028 "write_zeroes": true, 00:22:59.028 "zcopy": false, 00:22:59.028 "get_zone_info": false, 00:22:59.028 "zone_management": false, 00:22:59.028 "zone_append": false, 00:22:59.028 "compare": false, 00:22:59.028 "compare_and_write": false, 00:22:59.028 "abort": false, 00:22:59.028 "seek_hole": false, 00:22:59.028 "seek_data": false, 00:22:59.028 "copy": false, 00:22:59.028 "nvme_iov_md": false 00:22:59.028 }, 00:22:59.028 "memory_domains": [ 00:22:59.028 { 00:22:59.028 "dma_device_id": "system", 00:22:59.028 
"dma_device_type": 1 00:22:59.028 }, 00:22:59.028 { 00:22:59.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:59.028 "dma_device_type": 2 00:22:59.028 }, 00:22:59.028 { 00:22:59.028 "dma_device_id": "system", 00:22:59.028 "dma_device_type": 1 00:22:59.028 }, 00:22:59.028 { 00:22:59.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:59.028 "dma_device_type": 2 00:22:59.028 }, 00:22:59.028 { 00:22:59.029 "dma_device_id": "system", 00:22:59.029 "dma_device_type": 1 00:22:59.029 }, 00:22:59.029 { 00:22:59.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:59.029 "dma_device_type": 2 00:22:59.029 }, 00:22:59.029 { 00:22:59.029 "dma_device_id": "system", 00:22:59.029 "dma_device_type": 1 00:22:59.029 }, 00:22:59.029 { 00:22:59.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:59.029 "dma_device_type": 2 00:22:59.029 } 00:22:59.029 ], 00:22:59.029 "driver_specific": { 00:22:59.029 "raid": { 00:22:59.029 "uuid": "38914642-6a49-4bb1-9c89-227cdfdbb9cf", 00:22:59.029 "strip_size_kb": 0, 00:22:59.029 "state": "online", 00:22:59.029 "raid_level": "raid1", 00:22:59.029 "superblock": true, 00:22:59.029 "num_base_bdevs": 4, 00:22:59.029 "num_base_bdevs_discovered": 4, 00:22:59.029 "num_base_bdevs_operational": 4, 00:22:59.029 "base_bdevs_list": [ 00:22:59.029 { 00:22:59.029 "name": "pt1", 00:22:59.029 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:59.029 "is_configured": true, 00:22:59.029 "data_offset": 2048, 00:22:59.029 "data_size": 63488 00:22:59.029 }, 00:22:59.029 { 00:22:59.029 "name": "pt2", 00:22:59.029 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:59.029 "is_configured": true, 00:22:59.029 "data_offset": 2048, 00:22:59.029 "data_size": 63488 00:22:59.029 }, 00:22:59.029 { 00:22:59.029 "name": "pt3", 00:22:59.029 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:59.029 "is_configured": true, 00:22:59.029 "data_offset": 2048, 00:22:59.029 "data_size": 63488 00:22:59.029 }, 00:22:59.029 { 00:22:59.029 "name": "pt4", 00:22:59.029 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:22:59.029 "is_configured": true, 00:22:59.029 "data_offset": 2048, 00:22:59.029 "data_size": 63488 00:22:59.029 } 00:22:59.029 ] 00:22:59.029 } 00:22:59.029 } 00:22:59.029 }' 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:59.029 pt2 00:22:59.029 pt3 00:22:59.029 pt4' 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.029 23:04:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.029 [2024-12-09 23:04:34.372454] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:59.029 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.289 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 38914642-6a49-4bb1-9c89-227cdfdbb9cf '!=' 38914642-6a49-4bb1-9c89-227cdfdbb9cf ']' 00:22:59.289 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:22:59.289 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:59.289 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:22:59.289 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:22:59.289 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.289 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.289 [2024-12-09 23:04:34.408213] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:59.289 
23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.289 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:59.289 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:59.289 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:59.289 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:59.289 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:59.289 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:59.289 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:59.289 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:59.289 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:59.289 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:59.289 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.289 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.289 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.289 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.289 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.289 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:59.289 "name": "raid_bdev1", 00:22:59.289 "uuid": "38914642-6a49-4bb1-9c89-227cdfdbb9cf", 00:22:59.289 "strip_size_kb": 0, 00:22:59.289 "state": 
"online", 00:22:59.289 "raid_level": "raid1", 00:22:59.289 "superblock": true, 00:22:59.289 "num_base_bdevs": 4, 00:22:59.289 "num_base_bdevs_discovered": 3, 00:22:59.289 "num_base_bdevs_operational": 3, 00:22:59.289 "base_bdevs_list": [ 00:22:59.289 { 00:22:59.289 "name": null, 00:22:59.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.289 "is_configured": false, 00:22:59.289 "data_offset": 0, 00:22:59.289 "data_size": 63488 00:22:59.289 }, 00:22:59.289 { 00:22:59.289 "name": "pt2", 00:22:59.289 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:59.289 "is_configured": true, 00:22:59.289 "data_offset": 2048, 00:22:59.289 "data_size": 63488 00:22:59.289 }, 00:22:59.289 { 00:22:59.289 "name": "pt3", 00:22:59.289 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:59.289 "is_configured": true, 00:22:59.289 "data_offset": 2048, 00:22:59.289 "data_size": 63488 00:22:59.289 }, 00:22:59.289 { 00:22:59.289 "name": "pt4", 00:22:59.289 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:59.289 "is_configured": true, 00:22:59.289 "data_offset": 2048, 00:22:59.289 "data_size": 63488 00:22:59.289 } 00:22:59.289 ] 00:22:59.289 }' 00:22:59.289 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:59.289 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.549 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:59.549 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.549 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.549 [2024-12-09 23:04:34.720202] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:59.549 [2024-12-09 23:04:34.720236] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:59.549 [2024-12-09 23:04:34.720309] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:59.549 [2024-12-09 23:04:34.720387] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:59.549 [2024-12-09 23:04:34.720397] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:22:59.549 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.549 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.549 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.549 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.549 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:22:59.549 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.549 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:22:59.549 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:22:59.549 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:22:59.549 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:59.549 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:22:59.549 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.549 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.549 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.549 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:59.549 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:22:59.549 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:22:59.549 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.550 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.550 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.550 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:59.550 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:59.550 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:22:59.550 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.550 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.550 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.550 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:59.550 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:59.550 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:22:59.550 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:59.550 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:59.550 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.550 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.550 [2024-12-09 23:04:34.784227] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:59.550 [2024-12-09 
23:04:34.784374] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:59.550 [2024-12-09 23:04:34.784414] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:22:59.550 [2024-12-09 23:04:34.784465] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:59.550 [2024-12-09 23:04:34.786696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:59.550 [2024-12-09 23:04:34.786822] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:59.550 [2024-12-09 23:04:34.786912] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:59.550 [2024-12-09 23:04:34.786954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:59.550 pt2 00:22:59.550 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.550 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:22:59.550 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:59.550 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:59.550 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:59.550 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:59.550 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:59.550 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:59.550 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:59.550 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:59.550 23:04:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:59.550 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.550 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.550 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.550 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.550 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.550 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:59.550 "name": "raid_bdev1", 00:22:59.550 "uuid": "38914642-6a49-4bb1-9c89-227cdfdbb9cf", 00:22:59.550 "strip_size_kb": 0, 00:22:59.550 "state": "configuring", 00:22:59.550 "raid_level": "raid1", 00:22:59.550 "superblock": true, 00:22:59.550 "num_base_bdevs": 4, 00:22:59.550 "num_base_bdevs_discovered": 1, 00:22:59.550 "num_base_bdevs_operational": 3, 00:22:59.550 "base_bdevs_list": [ 00:22:59.550 { 00:22:59.550 "name": null, 00:22:59.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.550 "is_configured": false, 00:22:59.550 "data_offset": 2048, 00:22:59.550 "data_size": 63488 00:22:59.550 }, 00:22:59.550 { 00:22:59.550 "name": "pt2", 00:22:59.550 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:59.550 "is_configured": true, 00:22:59.550 "data_offset": 2048, 00:22:59.550 "data_size": 63488 00:22:59.550 }, 00:22:59.550 { 00:22:59.550 "name": null, 00:22:59.550 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:59.550 "is_configured": false, 00:22:59.550 "data_offset": 2048, 00:22:59.550 "data_size": 63488 00:22:59.550 }, 00:22:59.550 { 00:22:59.550 "name": null, 00:22:59.550 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:59.550 "is_configured": false, 00:22:59.550 "data_offset": 2048, 00:22:59.550 "data_size": 63488 00:22:59.550 
} 00:22:59.550 ] 00:22:59.550 }' 00:22:59.550 23:04:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:59.550 23:04:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.811 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:22:59.811 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:59.811 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:59.811 23:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.811 23:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.811 [2024-12-09 23:04:35.112391] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:59.811 [2024-12-09 23:04:35.112538] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:59.811 [2024-12-09 23:04:35.112576] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:22:59.811 [2024-12-09 23:04:35.112779] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:59.811 [2024-12-09 23:04:35.113220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:59.811 [2024-12-09 23:04:35.113243] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:59.811 [2024-12-09 23:04:35.113315] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:59.811 [2024-12-09 23:04:35.113334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:59.811 pt3 00:22:59.811 23:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.811 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # 
verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:22:59.811 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:59.811 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:59.811 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:59.811 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:59.811 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:59.811 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:59.811 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:59.811 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:59.811 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:59.811 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.811 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.811 23:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.811 23:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.811 23:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.811 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:59.811 "name": "raid_bdev1", 00:22:59.811 "uuid": "38914642-6a49-4bb1-9c89-227cdfdbb9cf", 00:22:59.811 "strip_size_kb": 0, 00:22:59.811 "state": "configuring", 00:22:59.811 "raid_level": "raid1", 00:22:59.811 "superblock": true, 00:22:59.811 "num_base_bdevs": 4, 00:22:59.811 "num_base_bdevs_discovered": 2, 
00:22:59.811 "num_base_bdevs_operational": 3, 00:22:59.811 "base_bdevs_list": [ 00:22:59.811 { 00:22:59.811 "name": null, 00:22:59.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.811 "is_configured": false, 00:22:59.811 "data_offset": 2048, 00:22:59.811 "data_size": 63488 00:22:59.811 }, 00:22:59.811 { 00:22:59.811 "name": "pt2", 00:22:59.811 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:59.811 "is_configured": true, 00:22:59.811 "data_offset": 2048, 00:22:59.811 "data_size": 63488 00:22:59.811 }, 00:22:59.811 { 00:22:59.811 "name": "pt3", 00:22:59.811 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:59.811 "is_configured": true, 00:22:59.811 "data_offset": 2048, 00:22:59.811 "data_size": 63488 00:22:59.811 }, 00:22:59.811 { 00:22:59.811 "name": null, 00:22:59.811 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:59.811 "is_configured": false, 00:22:59.811 "data_offset": 2048, 00:22:59.811 "data_size": 63488 00:22:59.811 } 00:22:59.811 ] 00:22:59.811 }' 00:22:59.812 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:59.812 23:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.401 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:23:00.401 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:00.401 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:23:00.401 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:00.401 23:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.401 23:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.401 [2024-12-09 23:04:35.444486] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:00.401 [2024-12-09 
23:04:35.444663] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:00.401 [2024-12-09 23:04:35.444706] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:23:00.401 [2024-12-09 23:04:35.444764] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:00.401 [2024-12-09 23:04:35.445194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:00.401 [2024-12-09 23:04:35.445286] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:00.401 [2024-12-09 23:04:35.445379] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:23:00.401 [2024-12-09 23:04:35.445486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:00.401 [2024-12-09 23:04:35.445671] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:00.401 [2024-12-09 23:04:35.445696] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:00.401 [2024-12-09 23:04:35.445946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:23:00.401 [2024-12-09 23:04:35.446111] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:00.401 [2024-12-09 23:04:35.446216] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:23:00.401 [2024-12-09 23:04:35.446345] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:00.401 pt4 00:23:00.401 23:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.401 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:00.401 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:00.401 23:04:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:00.401 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:00.401 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:00.401 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:00.401 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:00.401 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:00.401 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:00.401 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:00.401 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.401 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:00.401 23:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.401 23:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.401 23:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.401 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:00.401 "name": "raid_bdev1", 00:23:00.401 "uuid": "38914642-6a49-4bb1-9c89-227cdfdbb9cf", 00:23:00.401 "strip_size_kb": 0, 00:23:00.401 "state": "online", 00:23:00.401 "raid_level": "raid1", 00:23:00.401 "superblock": true, 00:23:00.401 "num_base_bdevs": 4, 00:23:00.401 "num_base_bdevs_discovered": 3, 00:23:00.401 "num_base_bdevs_operational": 3, 00:23:00.401 "base_bdevs_list": [ 00:23:00.401 { 00:23:00.401 "name": null, 00:23:00.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.401 
"is_configured": false, 00:23:00.401 "data_offset": 2048, 00:23:00.401 "data_size": 63488 00:23:00.401 }, 00:23:00.401 { 00:23:00.401 "name": "pt2", 00:23:00.401 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:00.401 "is_configured": true, 00:23:00.401 "data_offset": 2048, 00:23:00.401 "data_size": 63488 00:23:00.401 }, 00:23:00.401 { 00:23:00.401 "name": "pt3", 00:23:00.401 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:00.401 "is_configured": true, 00:23:00.401 "data_offset": 2048, 00:23:00.401 "data_size": 63488 00:23:00.401 }, 00:23:00.401 { 00:23:00.401 "name": "pt4", 00:23:00.401 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:00.401 "is_configured": true, 00:23:00.401 "data_offset": 2048, 00:23:00.401 "data_size": 63488 00:23:00.401 } 00:23:00.401 ] 00:23:00.401 }' 00:23:00.401 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:00.401 23:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.661 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:00.661 23:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.661 23:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.661 [2024-12-09 23:04:35.796524] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:00.661 [2024-12-09 23:04:35.796832] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:00.661 [2024-12-09 23:04:35.796906] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:00.661 [2024-12-09 23:04:35.796979] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:00.661 [2024-12-09 23:04:35.796991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 
00:23:00.661 23:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.661 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:23:00.661 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:00.661 23:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.661 23:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.661 23:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.661 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:23:00.661 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:23:00.661 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:23:00.661 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:23:00.661 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:23:00.661 23:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.661 23:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.661 23:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.661 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:00.661 23:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.661 23:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.661 [2024-12-09 23:04:35.856547] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:00.662 [2024-12-09 23:04:35.856706] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:23:00.662 [2024-12-09 23:04:35.856741] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:23:00.662 [2024-12-09 23:04:35.856795] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:00.662 [2024-12-09 23:04:35.859023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:00.662 [2024-12-09 23:04:35.859169] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:00.662 [2024-12-09 23:04:35.859298] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:00.662 [2024-12-09 23:04:35.859360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:00.662 [2024-12-09 23:04:35.859545] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:00.662 [2024-12-09 23:04:35.859662] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:00.662 [2024-12-09 23:04:35.859711] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:23:00.662 [2024-12-09 23:04:35.859797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:00.662 [2024-12-09 23:04:35.859917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:00.662 pt1 00:23:00.662 23:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.662 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:23:00.662 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:23:00.662 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:00.662 23:04:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:00.662 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:00.662 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:00.662 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:00.662 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:00.662 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:00.662 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:00.662 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:00.662 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:00.662 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.662 23:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.662 23:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.662 23:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.662 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:00.662 "name": "raid_bdev1", 00:23:00.662 "uuid": "38914642-6a49-4bb1-9c89-227cdfdbb9cf", 00:23:00.662 "strip_size_kb": 0, 00:23:00.662 "state": "configuring", 00:23:00.662 "raid_level": "raid1", 00:23:00.662 "superblock": true, 00:23:00.662 "num_base_bdevs": 4, 00:23:00.662 "num_base_bdevs_discovered": 2, 00:23:00.662 "num_base_bdevs_operational": 3, 00:23:00.662 "base_bdevs_list": [ 00:23:00.662 { 00:23:00.662 "name": null, 00:23:00.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.662 "is_configured": false, 00:23:00.662 
"data_offset": 2048, 00:23:00.662 "data_size": 63488 00:23:00.662 }, 00:23:00.662 { 00:23:00.662 "name": "pt2", 00:23:00.662 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:00.662 "is_configured": true, 00:23:00.662 "data_offset": 2048, 00:23:00.662 "data_size": 63488 00:23:00.662 }, 00:23:00.662 { 00:23:00.662 "name": "pt3", 00:23:00.662 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:00.662 "is_configured": true, 00:23:00.662 "data_offset": 2048, 00:23:00.662 "data_size": 63488 00:23:00.662 }, 00:23:00.662 { 00:23:00.662 "name": null, 00:23:00.662 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:00.662 "is_configured": false, 00:23:00.662 "data_offset": 2048, 00:23:00.662 "data_size": 63488 00:23:00.662 } 00:23:00.662 ] 00:23:00.662 }' 00:23:00.662 23:04:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:00.662 23:04:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.923 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:23:00.923 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:00.923 23:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.923 23:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.923 23:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.923 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:23:00.923 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:00.923 23:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.923 23:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:23:00.923 [2024-12-09 23:04:36.220666] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:00.923 [2024-12-09 23:04:36.220838] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:00.923 [2024-12-09 23:04:36.220866] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:23:00.923 [2024-12-09 23:04:36.220876] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:00.923 [2024-12-09 23:04:36.221292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:00.923 [2024-12-09 23:04:36.221313] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:00.923 [2024-12-09 23:04:36.221388] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:23:00.923 [2024-12-09 23:04:36.221407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:00.923 [2024-12-09 23:04:36.221525] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:23:00.923 [2024-12-09 23:04:36.221534] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:00.923 [2024-12-09 23:04:36.221779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:00.923 [2024-12-09 23:04:36.221913] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:23:00.923 [2024-12-09 23:04:36.221923] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:23:00.923 [2024-12-09 23:04:36.222047] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:00.923 pt4 00:23:00.923 23:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.923 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 
00:23:00.923 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:00.923 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:00.923 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:00.923 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:00.923 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:00.923 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:00.923 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:00.923 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:00.923 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:00.923 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:00.923 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.923 23:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.923 23:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.923 23:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.923 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:00.923 "name": "raid_bdev1", 00:23:00.923 "uuid": "38914642-6a49-4bb1-9c89-227cdfdbb9cf", 00:23:00.923 "strip_size_kb": 0, 00:23:00.923 "state": "online", 00:23:00.923 "raid_level": "raid1", 00:23:00.923 "superblock": true, 00:23:00.923 "num_base_bdevs": 4, 00:23:00.923 "num_base_bdevs_discovered": 3, 00:23:00.923 "num_base_bdevs_operational": 3, 00:23:00.923 
"base_bdevs_list": [ 00:23:00.923 { 00:23:00.923 "name": null, 00:23:00.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.923 "is_configured": false, 00:23:00.923 "data_offset": 2048, 00:23:00.923 "data_size": 63488 00:23:00.923 }, 00:23:00.923 { 00:23:00.923 "name": "pt2", 00:23:00.923 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:00.923 "is_configured": true, 00:23:00.923 "data_offset": 2048, 00:23:00.923 "data_size": 63488 00:23:00.923 }, 00:23:00.923 { 00:23:00.923 "name": "pt3", 00:23:00.923 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:00.923 "is_configured": true, 00:23:00.923 "data_offset": 2048, 00:23:00.923 "data_size": 63488 00:23:00.923 }, 00:23:00.923 { 00:23:00.923 "name": "pt4", 00:23:00.923 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:00.923 "is_configured": true, 00:23:00.923 "data_offset": 2048, 00:23:00.923 "data_size": 63488 00:23:00.923 } 00:23:00.923 ] 00:23:00.923 }' 00:23:00.923 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:00.923 23:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:01.185 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:23:01.185 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:01.185 23:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.185 23:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:01.448 23:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.448 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:23:01.448 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:01.448 23:04:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.448 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:23:01.448 23:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:01.448 [2024-12-09 23:04:36.580965] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:01.448 23:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.448 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 38914642-6a49-4bb1-9c89-227cdfdbb9cf '!=' 38914642-6a49-4bb1-9c89-227cdfdbb9cf ']' 00:23:01.448 23:04:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72596 00:23:01.448 23:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72596 ']' 00:23:01.448 23:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72596 00:23:01.448 23:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:23:01.448 23:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:01.448 23:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72596 00:23:01.448 killing process with pid 72596 00:23:01.448 23:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:01.448 23:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:01.448 23:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72596' 00:23:01.448 23:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72596 00:23:01.448 [2024-12-09 23:04:36.631389] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:01.448 [2024-12-09 23:04:36.631456] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:23:01.448 23:04:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72596 00:23:01.448 [2024-12-09 23:04:36.631518] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:01.448 [2024-12-09 23:04:36.631528] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:23:01.710 [2024-12-09 23:04:36.828843] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:02.282 23:04:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:23:02.282 00:23:02.282 real 0m6.247s 00:23:02.282 user 0m10.017s 00:23:02.282 sys 0m1.011s 00:23:02.282 23:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:02.282 23:04:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.282 ************************************ 00:23:02.282 END TEST raid_superblock_test 00:23:02.282 ************************************ 00:23:02.282 23:04:37 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:23:02.282 23:04:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:02.282 23:04:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:02.282 23:04:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:02.282 ************************************ 00:23:02.282 START TEST raid_read_error_test 00:23:02.282 ************************************ 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- 
# local error_io_type=read 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 
00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.X4LoPIo5CV 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73061 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73061 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73061 ']' 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:02.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:02.282 23:04:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.282 [2024-12-09 23:04:37.512107] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:23:02.282 [2024-12-09 23:04:37.512335] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73061 ] 00:23:02.543 [2024-12-09 23:04:37.664788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.543 [2024-12-09 23:04:37.765153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.543 [2024-12-09 23:04:37.902039] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:02.543 [2024-12-09 23:04:37.902090] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:03.116 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:03.116 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:23:03.116 23:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:03.116 23:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:03.116 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.116 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.116 BaseBdev1_malloc 00:23:03.116 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.116 23:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:23:03.116 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.116 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.116 true 00:23:03.116 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:03.116 23:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:03.116 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.116 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.116 [2024-12-09 23:04:38.400641] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:03.116 [2024-12-09 23:04:38.400811] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:03.116 [2024-12-09 23:04:38.400838] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:23:03.116 [2024-12-09 23:04:38.400849] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:03.116 [2024-12-09 23:04:38.402986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:03.116 [2024-12-09 23:04:38.403026] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:03.116 BaseBdev1 00:23:03.116 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.116 23:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:03.116 23:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:03.116 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.116 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.116 BaseBdev2_malloc 00:23:03.116 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.116 23:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:23:03.116 23:04:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.116 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.116 true 00:23:03.116 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.116 23:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:03.116 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.116 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.116 [2024-12-09 23:04:38.444508] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:23:03.116 [2024-12-09 23:04:38.444674] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:03.116 [2024-12-09 23:04:38.444712] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:23:03.116 [2024-12-09 23:04:38.444768] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:03.116 [2024-12-09 23:04:38.446898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:03.116 [2024-12-09 23:04:38.446935] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:03.116 BaseBdev2 00:23:03.116 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.116 23:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:03.116 23:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:03.116 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.116 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.378 BaseBdev3_malloc 00:23:03.378 23:04:38 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.378 23:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:23:03.378 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.378 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.378 true 00:23:03.378 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.378 23:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:03.378 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.378 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.378 [2024-12-09 23:04:38.497116] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:03.378 [2024-12-09 23:04:38.497277] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:03.378 [2024-12-09 23:04:38.497302] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:23:03.378 [2024-12-09 23:04:38.497313] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:03.378 [2024-12-09 23:04:38.499476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:03.378 [2024-12-09 23:04:38.499597] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:03.378 BaseBdev3 00:23:03.378 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.379 23:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:03.379 23:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:23:03.379 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.379 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.379 BaseBdev4_malloc 00:23:03.379 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.379 23:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:23:03.379 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.379 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.379 true 00:23:03.379 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.379 23:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:23:03.379 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.379 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.379 [2024-12-09 23:04:38.541138] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:23:03.379 [2024-12-09 23:04:38.541285] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:03.379 [2024-12-09 23:04:38.541308] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:03.379 [2024-12-09 23:04:38.541318] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:03.379 [2024-12-09 23:04:38.543519] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:03.379 [2024-12-09 23:04:38.543633] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:03.379 BaseBdev4 00:23:03.379 23:04:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.379 23:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:23:03.379 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.379 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.379 [2024-12-09 23:04:38.549200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:03.379 [2024-12-09 23:04:38.551127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:03.379 [2024-12-09 23:04:38.551279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:03.379 [2024-12-09 23:04:38.551403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:03.379 [2024-12-09 23:04:38.551640] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:23:03.379 [2024-12-09 23:04:38.551654] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:03.379 [2024-12-09 23:04:38.551898] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:23:03.379 [2024-12-09 23:04:38.552049] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:23:03.379 [2024-12-09 23:04:38.552057] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:23:03.379 [2024-12-09 23:04:38.552234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:03.379 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.379 23:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:03.379 23:04:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:03.379 23:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:03.379 23:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:03.379 23:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:03.379 23:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:03.379 23:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:03.379 23:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:03.379 23:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:03.379 23:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:03.379 23:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:03.379 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.379 23:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.379 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.379 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.379 23:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:03.379 "name": "raid_bdev1", 00:23:03.379 "uuid": "856a3eea-af51-4689-96f4-06633eb097c4", 00:23:03.379 "strip_size_kb": 0, 00:23:03.379 "state": "online", 00:23:03.379 "raid_level": "raid1", 00:23:03.379 "superblock": true, 00:23:03.379 "num_base_bdevs": 4, 00:23:03.379 "num_base_bdevs_discovered": 4, 00:23:03.379 "num_base_bdevs_operational": 4, 00:23:03.379 "base_bdevs_list": [ 00:23:03.379 { 
00:23:03.379 "name": "BaseBdev1", 00:23:03.379 "uuid": "b1349edd-3233-556d-b994-a5d111ff915e", 00:23:03.379 "is_configured": true, 00:23:03.379 "data_offset": 2048, 00:23:03.379 "data_size": 63488 00:23:03.379 }, 00:23:03.379 { 00:23:03.379 "name": "BaseBdev2", 00:23:03.379 "uuid": "6d7b3979-3435-5a36-b378-c94be27db26f", 00:23:03.379 "is_configured": true, 00:23:03.379 "data_offset": 2048, 00:23:03.379 "data_size": 63488 00:23:03.379 }, 00:23:03.379 { 00:23:03.379 "name": "BaseBdev3", 00:23:03.379 "uuid": "a81cc058-8e50-5e59-9bee-10de3ad9aeac", 00:23:03.379 "is_configured": true, 00:23:03.379 "data_offset": 2048, 00:23:03.379 "data_size": 63488 00:23:03.379 }, 00:23:03.379 { 00:23:03.379 "name": "BaseBdev4", 00:23:03.379 "uuid": "7af0747c-fab5-5ed0-81e8-13a2cd111a00", 00:23:03.379 "is_configured": true, 00:23:03.379 "data_offset": 2048, 00:23:03.379 "data_size": 63488 00:23:03.379 } 00:23:03.379 ] 00:23:03.379 }' 00:23:03.379 23:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:03.379 23:04:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.639 23:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:23:03.639 23:04:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:23:03.639 [2024-12-09 23:04:38.946237] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:23:04.582 23:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:23:04.582 23:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.582 23:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.582 23:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.582 23:04:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:23:04.582 23:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:23:04.582 23:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:23:04.582 23:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:23:04.582 23:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:04.582 23:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:04.582 23:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:04.582 23:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:04.582 23:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:04.583 23:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:04.583 23:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:04.583 23:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:04.583 23:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:04.583 23:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:04.583 23:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.583 23:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.583 23:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.583 23:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.583 23:04:39 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.583 23:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:04.583 "name": "raid_bdev1", 00:23:04.583 "uuid": "856a3eea-af51-4689-96f4-06633eb097c4", 00:23:04.583 "strip_size_kb": 0, 00:23:04.583 "state": "online", 00:23:04.583 "raid_level": "raid1", 00:23:04.583 "superblock": true, 00:23:04.583 "num_base_bdevs": 4, 00:23:04.583 "num_base_bdevs_discovered": 4, 00:23:04.583 "num_base_bdevs_operational": 4, 00:23:04.583 "base_bdevs_list": [ 00:23:04.583 { 00:23:04.583 "name": "BaseBdev1", 00:23:04.583 "uuid": "b1349edd-3233-556d-b994-a5d111ff915e", 00:23:04.583 "is_configured": true, 00:23:04.583 "data_offset": 2048, 00:23:04.583 "data_size": 63488 00:23:04.583 }, 00:23:04.583 { 00:23:04.583 "name": "BaseBdev2", 00:23:04.583 "uuid": "6d7b3979-3435-5a36-b378-c94be27db26f", 00:23:04.583 "is_configured": true, 00:23:04.583 "data_offset": 2048, 00:23:04.583 "data_size": 63488 00:23:04.583 }, 00:23:04.583 { 00:23:04.583 "name": "BaseBdev3", 00:23:04.583 "uuid": "a81cc058-8e50-5e59-9bee-10de3ad9aeac", 00:23:04.583 "is_configured": true, 00:23:04.583 "data_offset": 2048, 00:23:04.583 "data_size": 63488 00:23:04.583 }, 00:23:04.583 { 00:23:04.583 "name": "BaseBdev4", 00:23:04.583 "uuid": "7af0747c-fab5-5ed0-81e8-13a2cd111a00", 00:23:04.583 "is_configured": true, 00:23:04.583 "data_offset": 2048, 00:23:04.583 "data_size": 63488 00:23:04.583 } 00:23:04.583 ] 00:23:04.583 }' 00:23:04.583 23:04:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:04.583 23:04:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.843 23:04:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:04.843 23:04:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.843 23:04:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:23:04.843 [2024-12-09 23:04:40.195025] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:04.843 [2024-12-09 23:04:40.195055] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:04.843 [2024-12-09 23:04:40.198229] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:04.843 [2024-12-09 23:04:40.198373] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:04.843 [2024-12-09 23:04:40.198576] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:04.843 [2024-12-09 23:04:40.198664] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:23:04.843 23:04:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.843 { 00:23:04.843 "results": [ 00:23:04.843 { 00:23:04.843 "job": "raid_bdev1", 00:23:04.843 "core_mask": "0x1", 00:23:04.843 "workload": "randrw", 00:23:04.843 "percentage": 50, 00:23:04.843 "status": "finished", 00:23:04.843 "queue_depth": 1, 00:23:04.843 "io_size": 131072, 00:23:04.843 "runtime": 1.246941, 00:23:04.843 "iops": 10880.2260892857, 00:23:04.844 "mibps": 1360.0282611607124, 00:23:04.844 "io_failed": 0, 00:23:04.844 "io_timeout": 0, 00:23:04.844 "avg_latency_us": 88.61126965317428, 00:23:04.844 "min_latency_us": 30.916923076923077, 00:23:04.844 "max_latency_us": 1777.033846153846 00:23:04.844 } 00:23:04.844 ], 00:23:04.844 "core_count": 1 00:23:04.844 } 00:23:04.844 23:04:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73061 00:23:04.844 23:04:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73061 ']' 00:23:04.844 23:04:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73061 00:23:04.844 23:04:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:23:05.105 23:04:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:05.105 23:04:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73061 00:23:05.105 23:04:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:05.105 23:04:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:05.105 23:04:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73061' 00:23:05.105 killing process with pid 73061 00:23:05.105 23:04:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73061 00:23:05.105 [2024-12-09 23:04:40.228665] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:05.105 23:04:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73061 00:23:05.105 [2024-12-09 23:04:40.429075] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:06.047 23:04:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.X4LoPIo5CV 00:23:06.047 23:04:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:23:06.047 23:04:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:23:06.047 23:04:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:23:06.047 23:04:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:23:06.047 23:04:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:06.047 23:04:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:23:06.047 23:04:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:23:06.047 00:23:06.047 real 0m3.758s 00:23:06.047 user 0m4.429s 00:23:06.047 sys 0m0.396s 
00:23:06.047 ************************************ 00:23:06.047 END TEST raid_read_error_test 00:23:06.047 ************************************ 00:23:06.047 23:04:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:06.047 23:04:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.047 23:04:41 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:23:06.047 23:04:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:06.047 23:04:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:06.047 23:04:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:06.047 ************************************ 00:23:06.047 START TEST raid_write_error_test 00:23:06.047 ************************************ 00:23:06.047 23:04:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:23:06.047 23:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:23:06.047 23:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:23:06.047 23:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:23:06.047 23:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:23:06.047 23:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:06.047 23:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:23:06.047 23:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:06.048 23:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:06.048 23:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:23:06.048 23:04:41 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:06.048 23:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:06.048 23:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:23:06.048 23:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:06.048 23:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:06.048 23:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:23:06.048 23:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:06.048 23:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:06.048 23:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:06.048 23:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:23:06.048 23:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:23:06.048 23:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:23:06.048 23:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:23:06.048 23:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:23:06.048 23:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:23:06.048 23:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:23:06.048 23:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:23:06.048 23:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:23:06.048 23:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Xa033zTRs0 00:23:06.048 23:04:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73196 00:23:06.048 23:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73196 00:23:06.048 23:04:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73196 ']' 00:23:06.048 23:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:06.048 23:04:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.048 23:04:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:06.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:06.048 23:04:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.048 23:04:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:06.048 23:04:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.048 [2024-12-09 23:04:41.314218] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:23:06.048 [2024-12-09 23:04:41.314337] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73196 ] 00:23:06.310 [2024-12-09 23:04:41.470904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.310 [2024-12-09 23:04:41.557786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.310 [2024-12-09 23:04:41.669801] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:06.310 [2024-12-09 23:04:41.669835] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:06.889 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:06.889 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:23:06.889 23:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:06.889 23:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:06.889 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.889 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.889 BaseBdev1_malloc 00:23:06.889 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.889 23:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:23:06.889 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.889 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.889 true 00:23:06.889 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:23:06.889 23:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:06.889 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.889 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.889 [2024-12-09 23:04:42.225395] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:06.889 [2024-12-09 23:04:42.225573] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:06.889 [2024-12-09 23:04:42.225612] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:23:06.889 [2024-12-09 23:04:42.225797] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:06.889 [2024-12-09 23:04:42.227614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:06.889 [2024-12-09 23:04:42.227647] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:06.889 BaseBdev1 00:23:06.889 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.889 23:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:06.889 23:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:06.889 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.889 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.152 BaseBdev2_malloc 00:23:07.152 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.152 23:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:23:07.152 23:04:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.152 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.152 true 00:23:07.152 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.152 23:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:07.152 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.152 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.152 [2024-12-09 23:04:42.264917] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:23:07.152 [2024-12-09 23:04:42.265059] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:07.152 [2024-12-09 23:04:42.265078] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:23:07.152 [2024-12-09 23:04:42.265087] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:07.152 [2024-12-09 23:04:42.266857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:07.152 [2024-12-09 23:04:42.266884] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:07.152 BaseBdev2 00:23:07.152 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.152 23:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:07.152 23:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:07.152 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.152 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:23:07.152 BaseBdev3_malloc 00:23:07.152 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.152 23:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:23:07.152 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.152 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.152 true 00:23:07.152 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.152 23:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:07.152 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.152 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.152 [2024-12-09 23:04:42.318425] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:07.152 [2024-12-09 23:04:42.318580] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:07.152 [2024-12-09 23:04:42.318616] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:23:07.152 [2024-12-09 23:04:42.318789] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:07.152 [2024-12-09 23:04:42.320609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:07.152 BaseBdev3 00:23:07.152 [2024-12-09 23:04:42.320720] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:07.152 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.152 23:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:07.152 23:04:42 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:07.152 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.152 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.152 BaseBdev4_malloc 00:23:07.152 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.152 23:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:23:07.152 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.152 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.152 true 00:23:07.152 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.152 23:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:23:07.152 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.152 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.152 [2024-12-09 23:04:42.358241] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:23:07.152 [2024-12-09 23:04:42.358376] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:07.152 [2024-12-09 23:04:42.358410] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:07.153 [2024-12-09 23:04:42.358559] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:07.153 [2024-12-09 23:04:42.360330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:07.153 [2024-12-09 23:04:42.360437] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:07.153 BaseBdev4 
00:23:07.153 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.153 23:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:23:07.153 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.153 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.153 [2024-12-09 23:04:42.366289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:07.153 [2024-12-09 23:04:42.367873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:07.153 [2024-12-09 23:04:42.368010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:07.153 [2024-12-09 23:04:42.368140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:07.153 [2024-12-09 23:04:42.368410] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:23:07.153 [2024-12-09 23:04:42.368483] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:07.153 [2024-12-09 23:04:42.368744] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:23:07.153 [2024-12-09 23:04:42.368958] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:23:07.153 [2024-12-09 23:04:42.369024] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:23:07.153 [2024-12-09 23:04:42.369225] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:07.153 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.153 23:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:23:07.153 23:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:07.153 23:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:07.153 23:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:07.153 23:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:07.153 23:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:07.153 23:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:07.153 23:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:07.153 23:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:07.153 23:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:07.153 23:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.153 23:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:07.153 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.153 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.153 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.153 23:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:07.153 "name": "raid_bdev1", 00:23:07.153 "uuid": "f71aef3e-de17-4bbc-9fda-e8105dcfdf18", 00:23:07.153 "strip_size_kb": 0, 00:23:07.153 "state": "online", 00:23:07.153 "raid_level": "raid1", 00:23:07.153 "superblock": true, 00:23:07.153 "num_base_bdevs": 4, 00:23:07.153 "num_base_bdevs_discovered": 4, 00:23:07.153 
"num_base_bdevs_operational": 4, 00:23:07.153 "base_bdevs_list": [ 00:23:07.153 { 00:23:07.153 "name": "BaseBdev1", 00:23:07.153 "uuid": "55a6b366-127d-53e9-b764-3370458c7331", 00:23:07.153 "is_configured": true, 00:23:07.153 "data_offset": 2048, 00:23:07.153 "data_size": 63488 00:23:07.153 }, 00:23:07.153 { 00:23:07.153 "name": "BaseBdev2", 00:23:07.153 "uuid": "997dc2bb-3820-5429-901c-f6668bdbb312", 00:23:07.153 "is_configured": true, 00:23:07.153 "data_offset": 2048, 00:23:07.153 "data_size": 63488 00:23:07.153 }, 00:23:07.153 { 00:23:07.153 "name": "BaseBdev3", 00:23:07.153 "uuid": "0ff73a9d-898c-59d0-877b-c8272f8ed2a2", 00:23:07.153 "is_configured": true, 00:23:07.153 "data_offset": 2048, 00:23:07.153 "data_size": 63488 00:23:07.153 }, 00:23:07.153 { 00:23:07.153 "name": "BaseBdev4", 00:23:07.153 "uuid": "a83760bb-666e-546b-9af7-1af357d526be", 00:23:07.153 "is_configured": true, 00:23:07.153 "data_offset": 2048, 00:23:07.153 "data_size": 63488 00:23:07.153 } 00:23:07.153 ] 00:23:07.153 }' 00:23:07.153 23:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:07.153 23:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.410 23:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:23:07.410 23:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:23:07.410 [2024-12-09 23:04:42.767147] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:23:08.345 23:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:23:08.345 23:04:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.345 23:04:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.345 [2024-12-09 23:04:43.685172] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:23:08.345 [2024-12-09 23:04:43.685228] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:08.345 [2024-12-09 23:04:43.685432] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:23:08.345 23:04:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.345 23:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:23:08.345 23:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:23:08.345 23:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:23:08.346 23:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:23:08.346 23:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:08.346 23:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:08.346 23:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:08.346 23:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:08.346 23:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:08.346 23:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:08.346 23:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:08.346 23:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:08.346 23:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:08.346 23:04:43 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:23:08.346 23:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.346 23:04:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.346 23:04:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.346 23:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:08.346 23:04:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.603 23:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:08.603 "name": "raid_bdev1", 00:23:08.603 "uuid": "f71aef3e-de17-4bbc-9fda-e8105dcfdf18", 00:23:08.603 "strip_size_kb": 0, 00:23:08.603 "state": "online", 00:23:08.603 "raid_level": "raid1", 00:23:08.603 "superblock": true, 00:23:08.603 "num_base_bdevs": 4, 00:23:08.603 "num_base_bdevs_discovered": 3, 00:23:08.603 "num_base_bdevs_operational": 3, 00:23:08.603 "base_bdevs_list": [ 00:23:08.603 { 00:23:08.603 "name": null, 00:23:08.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.603 "is_configured": false, 00:23:08.603 "data_offset": 0, 00:23:08.603 "data_size": 63488 00:23:08.603 }, 00:23:08.603 { 00:23:08.603 "name": "BaseBdev2", 00:23:08.603 "uuid": "997dc2bb-3820-5429-901c-f6668bdbb312", 00:23:08.603 "is_configured": true, 00:23:08.603 "data_offset": 2048, 00:23:08.603 "data_size": 63488 00:23:08.603 }, 00:23:08.603 { 00:23:08.603 "name": "BaseBdev3", 00:23:08.603 "uuid": "0ff73a9d-898c-59d0-877b-c8272f8ed2a2", 00:23:08.603 "is_configured": true, 00:23:08.603 "data_offset": 2048, 00:23:08.603 "data_size": 63488 00:23:08.603 }, 00:23:08.603 { 00:23:08.603 "name": "BaseBdev4", 00:23:08.603 "uuid": "a83760bb-666e-546b-9af7-1af357d526be", 00:23:08.603 "is_configured": true, 00:23:08.603 "data_offset": 2048, 00:23:08.603 "data_size": 63488 00:23:08.603 } 00:23:08.603 ] 
00:23:08.603 }' 00:23:08.603 23:04:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:08.603 23:04:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.860 23:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:08.860 23:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.860 23:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.860 [2024-12-09 23:04:44.016918] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:08.860 [2024-12-09 23:04:44.016951] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:08.860 [2024-12-09 23:04:44.019628] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:08.860 [2024-12-09 23:04:44.019731] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:08.860 [2024-12-09 23:04:44.019839] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:08.860 [2024-12-09 23:04:44.019904] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:23:08.860 { 00:23:08.860 "results": [ 00:23:08.860 { 00:23:08.860 "job": "raid_bdev1", 00:23:08.860 "core_mask": "0x1", 00:23:08.860 "workload": "randrw", 00:23:08.860 "percentage": 50, 00:23:08.860 "status": "finished", 00:23:08.860 "queue_depth": 1, 00:23:08.860 "io_size": 131072, 00:23:08.860 "runtime": 1.248005, 00:23:08.860 "iops": 13625.746691719985, 00:23:08.860 "mibps": 1703.2183364649982, 00:23:08.860 "io_failed": 0, 00:23:08.860 "io_timeout": 0, 00:23:08.860 "avg_latency_us": 70.77839223757718, 00:23:08.860 "min_latency_us": 23.630769230769232, 00:23:08.860 "max_latency_us": 1392.64 00:23:08.860 } 00:23:08.860 ], 00:23:08.860 "core_count": 1 00:23:08.860 
} 00:23:08.860 23:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.860 23:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73196 00:23:08.860 23:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73196 ']' 00:23:08.860 23:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73196 00:23:08.860 23:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:23:08.860 23:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:08.860 23:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73196 00:23:08.860 23:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:08.860 23:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:08.860 23:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73196' 00:23:08.860 killing process with pid 73196 00:23:08.860 23:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73196 00:23:08.860 [2024-12-09 23:04:44.051346] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:08.860 23:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73196 00:23:08.860 [2024-12-09 23:04:44.213940] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:09.802 23:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Xa033zTRs0 00:23:09.802 23:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:23:09.802 23:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:23:09.802 23:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:23:09.802 23:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:23:09.802 23:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:09.802 23:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:23:09.802 23:04:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:23:09.802 00:23:09.802 real 0m3.596s 00:23:09.803 user 0m4.308s 00:23:09.803 sys 0m0.393s 00:23:09.803 ************************************ 00:23:09.803 END TEST raid_write_error_test 00:23:09.803 ************************************ 00:23:09.803 23:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:09.803 23:04:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.803 23:04:44 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:23:09.803 23:04:44 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:23:09.803 23:04:44 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:23:09.803 23:04:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:23:09.803 23:04:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:09.803 23:04:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:09.803 ************************************ 00:23:09.803 START TEST raid_rebuild_test 00:23:09.803 ************************************ 00:23:09.803 23:04:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:23:09.803 23:04:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:23:09.803 23:04:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:23:09.803 23:04:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:23:09.803 
23:04:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:23:09.803 23:04:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:23:09.803 23:04:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:23:09.803 23:04:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:09.803 23:04:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:23:09.803 23:04:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:09.803 23:04:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:09.803 23:04:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:23:09.803 23:04:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:09.803 23:04:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:09.803 23:04:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:09.803 23:04:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:23:09.803 23:04:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:23:09.803 23:04:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:23:09.803 23:04:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:23:09.803 23:04:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:23:09.803 23:04:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:23:09.803 23:04:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:23:09.803 23:04:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:23:09.803 23:04:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:23:09.803 23:04:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=73329 00:23:09.803 23:04:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 73329 00:23:09.803 23:04:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 73329 ']' 00:23:09.803 23:04:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.803 23:04:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:09.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.803 23:04:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.803 23:04:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:09.803 23:04:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.803 23:04:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:09.803 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:09.803 Zero copy mechanism will not be used. 00:23:09.803 [2024-12-09 23:04:44.956473] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:23:09.803 [2024-12-09 23:04:44.956617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73329 ] 00:23:09.803 [2024-12-09 23:04:45.114426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.069 [2024-12-09 23:04:45.201445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.069 [2024-12-09 23:04:45.314421] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:10.069 [2024-12-09 23:04:45.314460] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.636 BaseBdev1_malloc 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.636 [2024-12-09 23:04:45.859971] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:10.636 
[2024-12-09 23:04:45.860147] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:10.636 [2024-12-09 23:04:45.860187] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:10.636 [2024-12-09 23:04:45.860250] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:10.636 [2024-12-09 23:04:45.862095] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:10.636 [2024-12-09 23:04:45.862216] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:10.636 BaseBdev1 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.636 BaseBdev2_malloc 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.636 [2024-12-09 23:04:45.892313] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:10.636 [2024-12-09 23:04:45.892464] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:10.636 [2024-12-09 23:04:45.892503] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:23:10.636 [2024-12-09 23:04:45.892513] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:10.636 [2024-12-09 23:04:45.894348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:10.636 [2024-12-09 23:04:45.894458] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:10.636 BaseBdev2 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.636 spare_malloc 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.636 spare_delay 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.636 [2024-12-09 23:04:45.946281] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:10.636 [2024-12-09 23:04:45.946428] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:23:10.636 [2024-12-09 23:04:45.946463] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:10.636 [2024-12-09 23:04:45.947032] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:10.636 [2024-12-09 23:04:45.949005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:10.636 [2024-12-09 23:04:45.949120] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:10.636 spare 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.636 [2024-12-09 23:04:45.954330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:10.636 [2024-12-09 23:04:45.955843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:10.636 [2024-12-09 23:04:45.955920] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:10.636 [2024-12-09 23:04:45.955932] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:23:10.636 [2024-12-09 23:04:45.956175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:10.636 [2024-12-09 23:04:45.956301] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:10.636 [2024-12-09 23:04:45.956310] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:10.636 [2024-12-09 23:04:45.956432] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.636 23:04:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:10.637 23:04:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:10.637 23:04:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:10.637 23:04:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:10.637 23:04:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:10.637 23:04:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:10.637 23:04:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:10.637 23:04:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:10.637 23:04:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:10.637 23:04:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:10.637 23:04:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:10.637 23:04:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:10.637 23:04:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.637 23:04:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.637 23:04:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.637 23:04:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:10.637 "name": "raid_bdev1", 00:23:10.637 "uuid": "750d0b69-1824-4b9f-a892-8161ed190bb4", 00:23:10.637 "strip_size_kb": 0, 00:23:10.637 "state": "online", 00:23:10.637 
"raid_level": "raid1", 00:23:10.637 "superblock": false, 00:23:10.637 "num_base_bdevs": 2, 00:23:10.637 "num_base_bdevs_discovered": 2, 00:23:10.637 "num_base_bdevs_operational": 2, 00:23:10.637 "base_bdevs_list": [ 00:23:10.637 { 00:23:10.637 "name": "BaseBdev1", 00:23:10.637 "uuid": "579ef2be-a53c-5aa2-b076-0f2e6dea0871", 00:23:10.637 "is_configured": true, 00:23:10.637 "data_offset": 0, 00:23:10.637 "data_size": 65536 00:23:10.637 }, 00:23:10.637 { 00:23:10.637 "name": "BaseBdev2", 00:23:10.637 "uuid": "48878f2a-b15d-5c73-9624-a67119ad28ef", 00:23:10.637 "is_configured": true, 00:23:10.637 "data_offset": 0, 00:23:10.637 "data_size": 65536 00:23:10.637 } 00:23:10.637 ] 00:23:10.637 }' 00:23:10.637 23:04:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:10.637 23:04:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.207 [2024-12-09 23:04:46.270642] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:11.207 [2024-12-09 23:04:46.522465] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:11.207 /dev/nbd0 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:11.207 1+0 records in 00:23:11.207 1+0 records out 00:23:11.207 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385809 s, 10.6 MB/s 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:23:11.207 23:04:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:11.466 23:04:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:11.466 23:04:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:23:11.466 23:04:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:11.466 23:04:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:11.466 23:04:46 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:23:11.466 23:04:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:23:11.466 23:04:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:23:15.645 65536+0 records in 00:23:15.645 65536+0 records out 00:23:15.645 33554432 bytes (34 MB, 32 MiB) copied, 4.29456 s, 7.8 MB/s 00:23:15.645 23:04:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:23:15.645 23:04:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:15.645 23:04:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:15.645 23:04:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:15.645 23:04:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:23:15.645 23:04:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:15.645 23:04:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:15.901 [2024-12-09 23:04:51.023854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:15.901 23:04:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:15.901 23:04:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:15.901 23:04:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:15.901 23:04:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:15.901 23:04:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:15.901 23:04:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:15.901 23:04:51 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:23:15.901 23:04:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:15.901 23:04:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:23:15.901 23:04:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.901 23:04:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.901 [2024-12-09 23:04:51.048751] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:15.901 23:04:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.901 23:04:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:15.901 23:04:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:15.901 23:04:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:15.901 23:04:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:15.901 23:04:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:15.901 23:04:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:15.901 23:04:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:15.901 23:04:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:15.901 23:04:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:15.901 23:04:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:15.901 23:04:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.901 23:04:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:15.901 23:04:51 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.901 23:04:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.901 23:04:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.901 23:04:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:15.901 "name": "raid_bdev1", 00:23:15.901 "uuid": "750d0b69-1824-4b9f-a892-8161ed190bb4", 00:23:15.901 "strip_size_kb": 0, 00:23:15.901 "state": "online", 00:23:15.901 "raid_level": "raid1", 00:23:15.901 "superblock": false, 00:23:15.901 "num_base_bdevs": 2, 00:23:15.901 "num_base_bdevs_discovered": 1, 00:23:15.901 "num_base_bdevs_operational": 1, 00:23:15.901 "base_bdevs_list": [ 00:23:15.901 { 00:23:15.901 "name": null, 00:23:15.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.901 "is_configured": false, 00:23:15.901 "data_offset": 0, 00:23:15.901 "data_size": 65536 00:23:15.901 }, 00:23:15.901 { 00:23:15.901 "name": "BaseBdev2", 00:23:15.901 "uuid": "48878f2a-b15d-5c73-9624-a67119ad28ef", 00:23:15.901 "is_configured": true, 00:23:15.901 "data_offset": 0, 00:23:15.901 "data_size": 65536 00:23:15.901 } 00:23:15.901 ] 00:23:15.901 }' 00:23:15.901 23:04:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:15.901 23:04:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:16.157 23:04:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:16.157 23:04:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.157 23:04:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:16.157 [2024-12-09 23:04:51.344813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:16.157 [2024-12-09 23:04:51.354556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:23:16.157 23:04:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.157 23:04:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:23:16.157 [2024-12-09 23:04:51.356328] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:17.089 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:17.089 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:17.089 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:17.089 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:17.089 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:17.089 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:17.089 23:04:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.089 23:04:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.089 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.089 23:04:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.089 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:17.089 "name": "raid_bdev1", 00:23:17.089 "uuid": "750d0b69-1824-4b9f-a892-8161ed190bb4", 00:23:17.089 "strip_size_kb": 0, 00:23:17.089 "state": "online", 00:23:17.089 "raid_level": "raid1", 00:23:17.089 "superblock": false, 00:23:17.089 "num_base_bdevs": 2, 00:23:17.089 "num_base_bdevs_discovered": 2, 00:23:17.089 "num_base_bdevs_operational": 2, 00:23:17.089 "process": { 00:23:17.089 "type": "rebuild", 00:23:17.089 "target": "spare", 00:23:17.089 "progress": { 00:23:17.089 
"blocks": 20480, 00:23:17.089 "percent": 31 00:23:17.089 } 00:23:17.089 }, 00:23:17.089 "base_bdevs_list": [ 00:23:17.089 { 00:23:17.089 "name": "spare", 00:23:17.089 "uuid": "bddc638e-8550-5f80-be42-3523c2215956", 00:23:17.089 "is_configured": true, 00:23:17.089 "data_offset": 0, 00:23:17.089 "data_size": 65536 00:23:17.089 }, 00:23:17.089 { 00:23:17.089 "name": "BaseBdev2", 00:23:17.089 "uuid": "48878f2a-b15d-5c73-9624-a67119ad28ef", 00:23:17.089 "is_configured": true, 00:23:17.089 "data_offset": 0, 00:23:17.089 "data_size": 65536 00:23:17.089 } 00:23:17.089 ] 00:23:17.089 }' 00:23:17.089 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:17.089 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:17.089 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:17.089 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:17.090 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:17.090 23:04:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.090 23:04:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.347 [2024-12-09 23:04:52.450308] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:17.347 [2024-12-09 23:04:52.461669] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:17.347 [2024-12-09 23:04:52.461915] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:17.347 [2024-12-09 23:04:52.461932] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:17.347 [2024-12-09 23:04:52.461941] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:17.347 23:04:52 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.347 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:17.347 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:17.347 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:17.347 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:17.347 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:17.347 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:17.347 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:17.347 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:17.347 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:17.347 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:17.347 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.347 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:17.347 23:04:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.347 23:04:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.347 23:04:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.347 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:17.347 "name": "raid_bdev1", 00:23:17.347 "uuid": "750d0b69-1824-4b9f-a892-8161ed190bb4", 00:23:17.347 "strip_size_kb": 0, 00:23:17.347 "state": "online", 00:23:17.347 "raid_level": "raid1", 00:23:17.347 
"superblock": false, 00:23:17.347 "num_base_bdevs": 2, 00:23:17.347 "num_base_bdevs_discovered": 1, 00:23:17.347 "num_base_bdevs_operational": 1, 00:23:17.347 "base_bdevs_list": [ 00:23:17.347 { 00:23:17.347 "name": null, 00:23:17.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.347 "is_configured": false, 00:23:17.347 "data_offset": 0, 00:23:17.347 "data_size": 65536 00:23:17.347 }, 00:23:17.347 { 00:23:17.347 "name": "BaseBdev2", 00:23:17.347 "uuid": "48878f2a-b15d-5c73-9624-a67119ad28ef", 00:23:17.347 "is_configured": true, 00:23:17.347 "data_offset": 0, 00:23:17.347 "data_size": 65536 00:23:17.347 } 00:23:17.347 ] 00:23:17.347 }' 00:23:17.348 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:17.348 23:04:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.606 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:17.606 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:17.606 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:17.606 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:17.606 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:17.606 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:17.606 23:04:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.606 23:04:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.606 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.606 23:04:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.606 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:23:17.606 "name": "raid_bdev1", 00:23:17.606 "uuid": "750d0b69-1824-4b9f-a892-8161ed190bb4", 00:23:17.606 "strip_size_kb": 0, 00:23:17.606 "state": "online", 00:23:17.606 "raid_level": "raid1", 00:23:17.606 "superblock": false, 00:23:17.606 "num_base_bdevs": 2, 00:23:17.606 "num_base_bdevs_discovered": 1, 00:23:17.606 "num_base_bdevs_operational": 1, 00:23:17.606 "base_bdevs_list": [ 00:23:17.606 { 00:23:17.606 "name": null, 00:23:17.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.606 "is_configured": false, 00:23:17.606 "data_offset": 0, 00:23:17.606 "data_size": 65536 00:23:17.606 }, 00:23:17.606 { 00:23:17.606 "name": "BaseBdev2", 00:23:17.606 "uuid": "48878f2a-b15d-5c73-9624-a67119ad28ef", 00:23:17.606 "is_configured": true, 00:23:17.606 "data_offset": 0, 00:23:17.606 "data_size": 65536 00:23:17.606 } 00:23:17.606 ] 00:23:17.606 }' 00:23:17.606 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:17.606 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:17.606 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:17.606 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:17.606 23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:17.606 23:04:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.606 23:04:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.606 [2024-12-09 23:04:52.857432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:17.606 [2024-12-09 23:04:52.866882] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:23:17.606 23:04:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.606 
23:04:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:23:17.606 [2024-12-09 23:04:52.868541] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:18.539 23:04:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:18.539 23:04:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:18.539 23:04:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:18.539 23:04:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:18.539 23:04:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:18.539 23:04:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.539 23:04:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.539 23:04:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.539 23:04:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.539 23:04:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.798 23:04:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:18.798 "name": "raid_bdev1", 00:23:18.798 "uuid": "750d0b69-1824-4b9f-a892-8161ed190bb4", 00:23:18.798 "strip_size_kb": 0, 00:23:18.798 "state": "online", 00:23:18.798 "raid_level": "raid1", 00:23:18.798 "superblock": false, 00:23:18.798 "num_base_bdevs": 2, 00:23:18.798 "num_base_bdevs_discovered": 2, 00:23:18.798 "num_base_bdevs_operational": 2, 00:23:18.798 "process": { 00:23:18.798 "type": "rebuild", 00:23:18.798 "target": "spare", 00:23:18.798 "progress": { 00:23:18.798 "blocks": 20480, 00:23:18.798 "percent": 31 00:23:18.798 } 00:23:18.798 }, 00:23:18.798 "base_bdevs_list": [ 
00:23:18.798 { 00:23:18.798 "name": "spare", 00:23:18.798 "uuid": "bddc638e-8550-5f80-be42-3523c2215956", 00:23:18.798 "is_configured": true, 00:23:18.798 "data_offset": 0, 00:23:18.798 "data_size": 65536 00:23:18.798 }, 00:23:18.798 { 00:23:18.798 "name": "BaseBdev2", 00:23:18.798 "uuid": "48878f2a-b15d-5c73-9624-a67119ad28ef", 00:23:18.798 "is_configured": true, 00:23:18.798 "data_offset": 0, 00:23:18.798 "data_size": 65536 00:23:18.798 } 00:23:18.798 ] 00:23:18.798 }' 00:23:18.798 23:04:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:18.798 23:04:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:18.798 23:04:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:18.798 23:04:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:18.798 23:04:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:23:18.798 23:04:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:23:18.798 23:04:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:23:18.798 23:04:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:23:18.798 23:04:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=301 00:23:18.798 23:04:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:18.798 23:04:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:18.798 23:04:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:18.798 23:04:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:18.798 23:04:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:18.798 
23:04:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:18.798 23:04:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.798 23:04:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.798 23:04:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.798 23:04:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.798 23:04:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.798 23:04:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:18.798 "name": "raid_bdev1", 00:23:18.798 "uuid": "750d0b69-1824-4b9f-a892-8161ed190bb4", 00:23:18.798 "strip_size_kb": 0, 00:23:18.798 "state": "online", 00:23:18.798 "raid_level": "raid1", 00:23:18.798 "superblock": false, 00:23:18.798 "num_base_bdevs": 2, 00:23:18.798 "num_base_bdevs_discovered": 2, 00:23:18.798 "num_base_bdevs_operational": 2, 00:23:18.798 "process": { 00:23:18.798 "type": "rebuild", 00:23:18.798 "target": "spare", 00:23:18.798 "progress": { 00:23:18.798 "blocks": 20480, 00:23:18.798 "percent": 31 00:23:18.798 } 00:23:18.798 }, 00:23:18.798 "base_bdevs_list": [ 00:23:18.798 { 00:23:18.798 "name": "spare", 00:23:18.798 "uuid": "bddc638e-8550-5f80-be42-3523c2215956", 00:23:18.798 "is_configured": true, 00:23:18.798 "data_offset": 0, 00:23:18.798 "data_size": 65536 00:23:18.798 }, 00:23:18.798 { 00:23:18.798 "name": "BaseBdev2", 00:23:18.798 "uuid": "48878f2a-b15d-5c73-9624-a67119ad28ef", 00:23:18.798 "is_configured": true, 00:23:18.798 "data_offset": 0, 00:23:18.798 "data_size": 65536 00:23:18.798 } 00:23:18.798 ] 00:23:18.798 }' 00:23:18.798 23:04:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:18.798 23:04:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:23:18.798 23:04:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:18.798 23:04:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:18.798 23:04:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:19.730 23:04:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:19.730 23:04:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:19.730 23:04:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:19.730 23:04:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:19.730 23:04:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:19.730 23:04:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:19.730 23:04:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:19.730 23:04:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.730 23:04:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.730 23:04:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.730 23:04:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.991 23:04:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:19.991 "name": "raid_bdev1", 00:23:19.991 "uuid": "750d0b69-1824-4b9f-a892-8161ed190bb4", 00:23:19.991 "strip_size_kb": 0, 00:23:19.991 "state": "online", 00:23:19.991 "raid_level": "raid1", 00:23:19.991 "superblock": false, 00:23:19.991 "num_base_bdevs": 2, 00:23:19.991 "num_base_bdevs_discovered": 2, 00:23:19.991 "num_base_bdevs_operational": 2, 00:23:19.991 "process": { 
00:23:19.991 "type": "rebuild", 00:23:19.991 "target": "spare", 00:23:19.991 "progress": { 00:23:19.991 "blocks": 43008, 00:23:19.991 "percent": 65 00:23:19.991 } 00:23:19.991 }, 00:23:19.991 "base_bdevs_list": [ 00:23:19.991 { 00:23:19.991 "name": "spare", 00:23:19.991 "uuid": "bddc638e-8550-5f80-be42-3523c2215956", 00:23:19.991 "is_configured": true, 00:23:19.991 "data_offset": 0, 00:23:19.991 "data_size": 65536 00:23:19.991 }, 00:23:19.991 { 00:23:19.991 "name": "BaseBdev2", 00:23:19.991 "uuid": "48878f2a-b15d-5c73-9624-a67119ad28ef", 00:23:19.991 "is_configured": true, 00:23:19.991 "data_offset": 0, 00:23:19.991 "data_size": 65536 00:23:19.991 } 00:23:19.991 ] 00:23:19.991 }' 00:23:19.991 23:04:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:19.991 23:04:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:19.991 23:04:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:19.991 23:04:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:19.991 23:04:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:20.930 [2024-12-09 23:04:56.083667] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:20.930 [2024-12-09 23:04:56.083852] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:20.930 [2024-12-09 23:04:56.083901] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:20.930 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:20.930 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:20.930 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:20.930 23:04:56 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:20.930 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:20.930 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:20.930 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:20.930 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:20.930 23:04:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.930 23:04:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.930 23:04:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.930 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:20.930 "name": "raid_bdev1", 00:23:20.930 "uuid": "750d0b69-1824-4b9f-a892-8161ed190bb4", 00:23:20.930 "strip_size_kb": 0, 00:23:20.930 "state": "online", 00:23:20.930 "raid_level": "raid1", 00:23:20.930 "superblock": false, 00:23:20.930 "num_base_bdevs": 2, 00:23:20.930 "num_base_bdevs_discovered": 2, 00:23:20.930 "num_base_bdevs_operational": 2, 00:23:20.930 "base_bdevs_list": [ 00:23:20.930 { 00:23:20.930 "name": "spare", 00:23:20.930 "uuid": "bddc638e-8550-5f80-be42-3523c2215956", 00:23:20.930 "is_configured": true, 00:23:20.930 "data_offset": 0, 00:23:20.930 "data_size": 65536 00:23:20.930 }, 00:23:20.930 { 00:23:20.930 "name": "BaseBdev2", 00:23:20.930 "uuid": "48878f2a-b15d-5c73-9624-a67119ad28ef", 00:23:20.930 "is_configured": true, 00:23:20.930 "data_offset": 0, 00:23:20.930 "data_size": 65536 00:23:20.930 } 00:23:20.930 ] 00:23:20.930 }' 00:23:20.930 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:20.930 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:20.930 23:04:56 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:20.930 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:23:20.930 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:23:20.930 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:20.930 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:20.930 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:20.930 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:20.930 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:20.930 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:20.930 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:20.930 23:04:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.930 23:04:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.188 23:04:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.188 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:21.188 "name": "raid_bdev1", 00:23:21.188 "uuid": "750d0b69-1824-4b9f-a892-8161ed190bb4", 00:23:21.188 "strip_size_kb": 0, 00:23:21.188 "state": "online", 00:23:21.188 "raid_level": "raid1", 00:23:21.188 "superblock": false, 00:23:21.188 "num_base_bdevs": 2, 00:23:21.188 "num_base_bdevs_discovered": 2, 00:23:21.188 "num_base_bdevs_operational": 2, 00:23:21.188 "base_bdevs_list": [ 00:23:21.188 { 00:23:21.188 "name": "spare", 00:23:21.188 "uuid": "bddc638e-8550-5f80-be42-3523c2215956", 00:23:21.188 "is_configured": true, 
00:23:21.188 "data_offset": 0, 00:23:21.188 "data_size": 65536 00:23:21.188 }, 00:23:21.188 { 00:23:21.188 "name": "BaseBdev2", 00:23:21.188 "uuid": "48878f2a-b15d-5c73-9624-a67119ad28ef", 00:23:21.188 "is_configured": true, 00:23:21.188 "data_offset": 0, 00:23:21.188 "data_size": 65536 00:23:21.188 } 00:23:21.188 ] 00:23:21.188 }' 00:23:21.188 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:21.188 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:21.188 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:21.188 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:21.188 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:21.188 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:21.188 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:21.188 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:21.188 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:21.188 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:21.188 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:21.188 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:21.188 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:21.188 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:21.188 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:21.188 23:04:56 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.188 23:04:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.188 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:21.188 23:04:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.188 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:21.188 "name": "raid_bdev1", 00:23:21.188 "uuid": "750d0b69-1824-4b9f-a892-8161ed190bb4", 00:23:21.188 "strip_size_kb": 0, 00:23:21.188 "state": "online", 00:23:21.188 "raid_level": "raid1", 00:23:21.188 "superblock": false, 00:23:21.188 "num_base_bdevs": 2, 00:23:21.188 "num_base_bdevs_discovered": 2, 00:23:21.188 "num_base_bdevs_operational": 2, 00:23:21.188 "base_bdevs_list": [ 00:23:21.188 { 00:23:21.188 "name": "spare", 00:23:21.188 "uuid": "bddc638e-8550-5f80-be42-3523c2215956", 00:23:21.188 "is_configured": true, 00:23:21.188 "data_offset": 0, 00:23:21.188 "data_size": 65536 00:23:21.188 }, 00:23:21.188 { 00:23:21.188 "name": "BaseBdev2", 00:23:21.188 "uuid": "48878f2a-b15d-5c73-9624-a67119ad28ef", 00:23:21.188 "is_configured": true, 00:23:21.188 "data_offset": 0, 00:23:21.188 "data_size": 65536 00:23:21.188 } 00:23:21.188 ] 00:23:21.188 }' 00:23:21.188 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:21.188 23:04:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.448 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:21.448 23:04:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.448 23:04:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.448 [2024-12-09 23:04:56.726847] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:21.448 [2024-12-09 23:04:56.726874] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:21.448 [2024-12-09 23:04:56.726938] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:21.448 [2024-12-09 23:04:56.726994] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:21.448 [2024-12-09 23:04:56.727002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:21.448 23:04:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.448 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:23:21.448 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:21.448 23:04:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.448 23:04:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.448 23:04:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.448 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:23:21.448 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:23:21.448 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:23:21.448 23:04:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:21.448 23:04:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:21.448 23:04:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:21.448 23:04:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:21.448 23:04:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:23:21.448 23:04:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:21.448 23:04:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:23:21.448 23:04:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:21.448 23:04:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:21.448 23:04:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:21.705 /dev/nbd0 00:23:21.965 23:04:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:21.965 23:04:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:21.965 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:21.965 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:23:21.965 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:21.965 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:21.965 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:21.965 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:23:21.965 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:21.965 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:21.965 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:21.965 1+0 records in 00:23:21.965 1+0 records out 00:23:21.965 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302567 s, 13.5 MB/s 00:23:21.965 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:21.965 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:23:21.965 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:21.965 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:21.965 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:23:21.965 23:04:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:21.965 23:04:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:21.965 23:04:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:23:21.965 /dev/nbd1 00:23:21.965 23:04:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:22.238 23:04:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:22.238 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:23:22.238 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:23:22.238 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:22.238 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:22.238 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:23:22.238 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:23:22.238 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:22.238 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:22.238 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:22.238 1+0 records in 00:23:22.238 1+0 records out 00:23:22.238 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306188 s, 13.4 MB/s 00:23:22.238 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:22.238 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:23:22.238 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:22.238 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:22.238 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:23:22.238 23:04:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:22.238 23:04:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:22.238 23:04:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:22.238 23:04:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:23:22.238 23:04:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:22.238 23:04:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:22.238 23:04:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:22.238 23:04:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:23:22.238 23:04:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:22.238 23:04:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:22.498 23:04:57 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:22.498 23:04:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:22.498 23:04:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:22.498 23:04:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:22.498 23:04:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:22.498 23:04:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:22.498 23:04:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:22.498 23:04:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:22.498 23:04:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:22.498 23:04:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:23:22.498 23:04:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:22.498 23:04:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:22.498 23:04:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:22.498 23:04:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:22.498 23:04:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:22.498 23:04:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:22.498 23:04:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:22.498 23:04:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:22.498 23:04:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:23:22.498 23:04:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 73329 00:23:22.498 23:04:57 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 73329 ']' 00:23:22.498 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 73329 00:23:22.498 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:23:22.498 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:22.498 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73329 00:23:22.756 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:22.756 killing process with pid 73329 00:23:22.756 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:22.756 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73329' 00:23:22.756 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 73329 00:23:22.756 Received shutdown signal, test time was about 60.000000 seconds 00:23:22.756 00:23:22.756 Latency(us) 00:23:22.756 [2024-12-09T23:04:58.119Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:22.756 [2024-12-09T23:04:58.119Z] =================================================================================================================== 00:23:22.756 [2024-12-09T23:04:58.119Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:22.756 [2024-12-09 23:04:57.866837] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:22.756 23:04:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 73329 00:23:22.756 [2024-12-09 23:04:58.018290] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:23.322 23:04:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:23:23.322 ************************************ 00:23:23.322 END TEST raid_rebuild_test 00:23:23.322 
************************************ 00:23:23.322 00:23:23.322 real 0m13.721s 00:23:23.322 user 0m15.262s 00:23:23.322 sys 0m2.529s 00:23:23.322 23:04:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:23.322 23:04:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.322 23:04:58 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:23:23.322 23:04:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:23:23.322 23:04:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:23.323 23:04:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:23.323 ************************************ 00:23:23.323 START TEST raid_rebuild_test_sb 00:23:23.323 ************************************ 00:23:23.323 23:04:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:23:23.323 23:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:23:23.323 23:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:23:23.323 23:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:23:23.323 23:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:23:23.323 23:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:23:23.323 23:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:23:23.323 23:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:23.323 23:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:23:23.323 23:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:23.323 23:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( 
i <= num_base_bdevs )) 00:23:23.323 23:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:23:23.323 23:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:23.323 23:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:23.323 23:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:23.323 23:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:23:23.323 23:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:23:23.323 23:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:23:23.323 23:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:23:23.323 23:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:23:23.323 23:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:23:23.323 23:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:23:23.323 23:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:23:23.323 23:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:23:23.323 23:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:23:23.323 23:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=73736 00:23:23.323 23:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 73736 00:23:23.323 23:04:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73736 ']' 00:23:23.323 23:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:23.323 
23:04:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.323 23:04:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:23.323 23:04:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.323 23:04:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:23.323 23:04:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.581 [2024-12-09 23:04:58.710391] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:23:23.581 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:23.581 Zero copy mechanism will not be used. 00:23:23.581 [2024-12-09 23:04:58.710494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73736 ] 00:23:23.581 [2024-12-09 23:04:58.866152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.837 [2024-12-09 23:04:58.969883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.837 [2024-12-09 23:04:59.107556] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:23.837 [2024-12-09 23:04:59.107612] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:24.402 23:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:24.402 23:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:23:24.402 23:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # 
for bdev in "${base_bdevs[@]}" 00:23:24.402 23:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:24.402 23:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.402 23:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.402 BaseBdev1_malloc 00:23:24.402 23:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.402 23:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:24.402 23:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.402 23:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.402 [2024-12-09 23:04:59.551398] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:24.402 [2024-12-09 23:04:59.551458] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:24.402 [2024-12-09 23:04:59.551478] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:24.402 [2024-12-09 23:04:59.551490] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:24.402 [2024-12-09 23:04:59.553606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:24.402 [2024-12-09 23:04:59.553645] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:24.402 BaseBdev1 00:23:24.402 23:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.402 23:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:24.402 23:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:24.402 23:04:59 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.402 23:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.402 BaseBdev2_malloc 00:23:24.402 23:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.402 23:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:24.402 23:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.402 23:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.402 [2024-12-09 23:04:59.587522] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:24.402 [2024-12-09 23:04:59.587575] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:24.402 [2024-12-09 23:04:59.587595] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:24.402 [2024-12-09 23:04:59.587607] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:24.402 [2024-12-09 23:04:59.589678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:24.402 [2024-12-09 23:04:59.589714] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:24.402 BaseBdev2 00:23:24.402 23:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.402 23:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:23:24.402 23:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.402 23:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.402 spare_malloc 00:23:24.402 23:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:24.402 23:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:24.402 23:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.402 23:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.402 spare_delay 00:23:24.402 23:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.402 23:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:24.402 23:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.402 23:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.403 [2024-12-09 23:04:59.639954] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:24.403 [2024-12-09 23:04:59.640006] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:24.403 [2024-12-09 23:04:59.640022] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:24.403 [2024-12-09 23:04:59.640033] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:24.403 [2024-12-09 23:04:59.642176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:24.403 [2024-12-09 23:04:59.642209] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:24.403 spare 00:23:24.403 23:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.403 23:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:23:24.403 23:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.403 23:04:59 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.403 [2024-12-09 23:04:59.648011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:24.403 [2024-12-09 23:04:59.649835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:24.403 [2024-12-09 23:04:59.649997] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:24.403 [2024-12-09 23:04:59.650016] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:24.403 [2024-12-09 23:04:59.650266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:24.403 [2024-12-09 23:04:59.650419] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:24.403 [2024-12-09 23:04:59.650434] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:24.403 [2024-12-09 23:04:59.650564] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:24.403 23:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.403 23:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:24.403 23:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:24.403 23:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:24.403 23:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:24.403 23:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:24.403 23:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:24.403 23:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:23:24.403 23:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:24.403 23:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:24.403 23:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:24.403 23:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:24.403 23:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:24.403 23:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.403 23:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.403 23:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.403 23:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:24.403 "name": "raid_bdev1", 00:23:24.403 "uuid": "ab382179-d81f-48c8-9711-361accf51c28", 00:23:24.403 "strip_size_kb": 0, 00:23:24.403 "state": "online", 00:23:24.403 "raid_level": "raid1", 00:23:24.403 "superblock": true, 00:23:24.403 "num_base_bdevs": 2, 00:23:24.403 "num_base_bdevs_discovered": 2, 00:23:24.403 "num_base_bdevs_operational": 2, 00:23:24.403 "base_bdevs_list": [ 00:23:24.403 { 00:23:24.403 "name": "BaseBdev1", 00:23:24.403 "uuid": "630af9c0-54d5-57e7-ab07-ae766922d59a", 00:23:24.403 "is_configured": true, 00:23:24.403 "data_offset": 2048, 00:23:24.403 "data_size": 63488 00:23:24.403 }, 00:23:24.403 { 00:23:24.403 "name": "BaseBdev2", 00:23:24.403 "uuid": "6247e9b3-6b62-5b84-beb6-b2486fd4b42f", 00:23:24.403 "is_configured": true, 00:23:24.403 "data_offset": 2048, 00:23:24.403 "data_size": 63488 00:23:24.403 } 00:23:24.403 ] 00:23:24.403 }' 00:23:24.403 23:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:24.403 23:04:59 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:23:24.659 23:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:24.659 23:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:24.659 23:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.659 23:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.659 [2024-12-09 23:04:59.976404] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:24.659 23:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.659 23:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:23:24.659 23:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:24.659 23:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.659 23:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.659 23:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:24.659 23:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.960 23:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:23:24.960 23:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:23:24.960 23:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:23:24.960 23:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:23:24.960 23:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:23:24.960 23:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:23:24.960 23:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:24.960 23:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:24.960 23:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:24.960 23:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:24.960 23:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:23:24.960 23:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:24.960 23:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:24.960 23:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:24.960 [2024-12-09 23:05:00.188198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:24.960 /dev/nbd0 00:23:24.960 23:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:24.960 23:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:24.960 23:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:24.960 23:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:23:24.960 23:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:24.960 23:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:24.960 23:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:24.960 23:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:23:24.960 23:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:23:24.960 23:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:24.960 23:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:24.960 1+0 records in 00:23:24.960 1+0 records out 00:23:24.960 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281156 s, 14.6 MB/s 00:23:24.960 23:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:24.960 23:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:23:24.960 23:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:24.960 23:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:24.960 23:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:23:24.960 23:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:24.960 23:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:24.960 23:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:23:24.960 23:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:23:24.960 23:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:23:30.222 63488+0 records in 00:23:30.222 63488+0 records out 00:23:30.222 32505856 bytes (33 MB, 31 MiB) copied, 4.45959 s, 7.3 MB/s 00:23:30.222 23:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:23:30.222 23:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:30.222 23:05:04 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:30.222 23:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:30.222 23:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:23:30.222 23:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:30.222 23:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:30.222 [2024-12-09 23:05:04.895436] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:30.222 23:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:30.222 23:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:30.222 23:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:30.222 23:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:30.222 23:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:30.222 23:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:30.222 23:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:30.222 23:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:30.222 23:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:23:30.222 23:05:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.222 23:05:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.222 [2024-12-09 23:05:04.925046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:30.222 23:05:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:23:30.222 23:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:30.222 23:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:30.223 23:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:30.223 23:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:30.223 23:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:30.223 23:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:30.223 23:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:30.223 23:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:30.223 23:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:30.223 23:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:30.223 23:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:30.223 23:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:30.223 23:05:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.223 23:05:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.223 23:05:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.223 23:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:30.223 "name": "raid_bdev1", 00:23:30.223 "uuid": "ab382179-d81f-48c8-9711-361accf51c28", 00:23:30.223 "strip_size_kb": 0, 00:23:30.223 "state": "online", 00:23:30.223 "raid_level": "raid1", 00:23:30.223 "superblock": true, 
00:23:30.223 "num_base_bdevs": 2, 00:23:30.223 "num_base_bdevs_discovered": 1, 00:23:30.223 "num_base_bdevs_operational": 1, 00:23:30.223 "base_bdevs_list": [ 00:23:30.223 { 00:23:30.223 "name": null, 00:23:30.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.223 "is_configured": false, 00:23:30.223 "data_offset": 0, 00:23:30.223 "data_size": 63488 00:23:30.223 }, 00:23:30.223 { 00:23:30.223 "name": "BaseBdev2", 00:23:30.223 "uuid": "6247e9b3-6b62-5b84-beb6-b2486fd4b42f", 00:23:30.223 "is_configured": true, 00:23:30.223 "data_offset": 2048, 00:23:30.223 "data_size": 63488 00:23:30.223 } 00:23:30.223 ] 00:23:30.223 }' 00:23:30.223 23:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:30.223 23:05:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.223 23:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:30.223 23:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.223 23:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.223 [2024-12-09 23:05:05.233127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:30.223 [2024-12-09 23:05:05.242802] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:23:30.223 23:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.223 23:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:23:30.223 [2024-12-09 23:05:05.244415] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:31.155 "name": "raid_bdev1", 00:23:31.155 "uuid": "ab382179-d81f-48c8-9711-361accf51c28", 00:23:31.155 "strip_size_kb": 0, 00:23:31.155 "state": "online", 00:23:31.155 "raid_level": "raid1", 00:23:31.155 "superblock": true, 00:23:31.155 "num_base_bdevs": 2, 00:23:31.155 "num_base_bdevs_discovered": 2, 00:23:31.155 "num_base_bdevs_operational": 2, 00:23:31.155 "process": { 00:23:31.155 "type": "rebuild", 00:23:31.155 "target": "spare", 00:23:31.155 "progress": { 00:23:31.155 "blocks": 20480, 00:23:31.155 "percent": 32 00:23:31.155 } 00:23:31.155 }, 00:23:31.155 "base_bdevs_list": [ 00:23:31.155 { 00:23:31.155 "name": "spare", 00:23:31.155 "uuid": "ffbfa40c-06cd-50b8-a5e7-8f6fc56fdee4", 00:23:31.155 "is_configured": true, 00:23:31.155 "data_offset": 2048, 00:23:31.155 "data_size": 63488 00:23:31.155 }, 00:23:31.155 { 00:23:31.155 "name": "BaseBdev2", 00:23:31.155 "uuid": "6247e9b3-6b62-5b84-beb6-b2486fd4b42f", 00:23:31.155 "is_configured": true, 00:23:31.155 "data_offset": 2048, 00:23:31.155 "data_size": 63488 
00:23:31.155 } 00:23:31.155 ] 00:23:31.155 }' 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.155 [2024-12-09 23:05:06.346707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:31.155 [2024-12-09 23:05:06.349637] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:31.155 [2024-12-09 23:05:06.349688] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:31.155 [2024-12-09 23:05:06.349700] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:31.155 [2024-12-09 23:05:06.349710] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:31.155 "name": "raid_bdev1", 00:23:31.155 "uuid": "ab382179-d81f-48c8-9711-361accf51c28", 00:23:31.155 "strip_size_kb": 0, 00:23:31.155 "state": "online", 00:23:31.155 "raid_level": "raid1", 00:23:31.155 "superblock": true, 00:23:31.155 "num_base_bdevs": 2, 00:23:31.155 "num_base_bdevs_discovered": 1, 00:23:31.155 "num_base_bdevs_operational": 1, 00:23:31.155 "base_bdevs_list": [ 00:23:31.155 { 00:23:31.155 "name": null, 00:23:31.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.155 "is_configured": false, 00:23:31.155 "data_offset": 0, 00:23:31.155 "data_size": 63488 00:23:31.155 }, 00:23:31.155 { 00:23:31.155 "name": "BaseBdev2", 00:23:31.155 "uuid": 
"6247e9b3-6b62-5b84-beb6-b2486fd4b42f", 00:23:31.155 "is_configured": true, 00:23:31.155 "data_offset": 2048, 00:23:31.155 "data_size": 63488 00:23:31.155 } 00:23:31.155 ] 00:23:31.155 }' 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:31.155 23:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.413 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:31.413 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:31.413 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:31.413 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:31.413 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:31.413 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.413 23:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.413 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.413 23:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.413 23:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.413 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:31.413 "name": "raid_bdev1", 00:23:31.413 "uuid": "ab382179-d81f-48c8-9711-361accf51c28", 00:23:31.413 "strip_size_kb": 0, 00:23:31.413 "state": "online", 00:23:31.413 "raid_level": "raid1", 00:23:31.413 "superblock": true, 00:23:31.413 "num_base_bdevs": 2, 00:23:31.413 "num_base_bdevs_discovered": 1, 00:23:31.413 "num_base_bdevs_operational": 1, 00:23:31.413 "base_bdevs_list": [ 00:23:31.413 { 
00:23:31.413 "name": null, 00:23:31.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.413 "is_configured": false, 00:23:31.413 "data_offset": 0, 00:23:31.413 "data_size": 63488 00:23:31.413 }, 00:23:31.413 { 00:23:31.413 "name": "BaseBdev2", 00:23:31.413 "uuid": "6247e9b3-6b62-5b84-beb6-b2486fd4b42f", 00:23:31.413 "is_configured": true, 00:23:31.413 "data_offset": 2048, 00:23:31.413 "data_size": 63488 00:23:31.413 } 00:23:31.413 ] 00:23:31.413 }' 00:23:31.413 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:31.413 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:31.413 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:31.670 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:31.670 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:31.670 23:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.670 23:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.670 [2024-12-09 23:05:06.788801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:31.670 [2024-12-09 23:05:06.797607] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:23:31.670 23:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.670 23:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:23:31.670 [2024-12-09 23:05:06.799207] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:32.606 23:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:32.606 23:05:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:32.606 23:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:32.606 23:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:32.606 23:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:32.606 23:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:32.606 23:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.606 23:05:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.606 23:05:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:32.606 23:05:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.606 23:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:32.606 "name": "raid_bdev1", 00:23:32.606 "uuid": "ab382179-d81f-48c8-9711-361accf51c28", 00:23:32.606 "strip_size_kb": 0, 00:23:32.606 "state": "online", 00:23:32.606 "raid_level": "raid1", 00:23:32.606 "superblock": true, 00:23:32.606 "num_base_bdevs": 2, 00:23:32.606 "num_base_bdevs_discovered": 2, 00:23:32.606 "num_base_bdevs_operational": 2, 00:23:32.606 "process": { 00:23:32.606 "type": "rebuild", 00:23:32.606 "target": "spare", 00:23:32.606 "progress": { 00:23:32.606 "blocks": 20480, 00:23:32.606 "percent": 32 00:23:32.606 } 00:23:32.606 }, 00:23:32.606 "base_bdevs_list": [ 00:23:32.606 { 00:23:32.606 "name": "spare", 00:23:32.606 "uuid": "ffbfa40c-06cd-50b8-a5e7-8f6fc56fdee4", 00:23:32.606 "is_configured": true, 00:23:32.606 "data_offset": 2048, 00:23:32.606 "data_size": 63488 00:23:32.606 }, 00:23:32.606 { 00:23:32.606 "name": "BaseBdev2", 00:23:32.606 "uuid": "6247e9b3-6b62-5b84-beb6-b2486fd4b42f", 00:23:32.606 
"is_configured": true, 00:23:32.606 "data_offset": 2048, 00:23:32.606 "data_size": 63488 00:23:32.606 } 00:23:32.606 ] 00:23:32.606 }' 00:23:32.606 23:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:32.606 23:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:32.606 23:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:32.606 23:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:32.606 23:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:23:32.606 23:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:23:32.606 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:23:32.606 23:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:23:32.606 23:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:23:32.606 23:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:23:32.606 23:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=315 00:23:32.606 23:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:32.606 23:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:32.606 23:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:32.606 23:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:32.606 23:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:32.606 23:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:23:32.606 23:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:32.606 23:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.606 23:05:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.606 23:05:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:32.606 23:05:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.606 23:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:32.606 "name": "raid_bdev1", 00:23:32.606 "uuid": "ab382179-d81f-48c8-9711-361accf51c28", 00:23:32.606 "strip_size_kb": 0, 00:23:32.606 "state": "online", 00:23:32.606 "raid_level": "raid1", 00:23:32.606 "superblock": true, 00:23:32.606 "num_base_bdevs": 2, 00:23:32.606 "num_base_bdevs_discovered": 2, 00:23:32.606 "num_base_bdevs_operational": 2, 00:23:32.606 "process": { 00:23:32.606 "type": "rebuild", 00:23:32.606 "target": "spare", 00:23:32.606 "progress": { 00:23:32.606 "blocks": 20480, 00:23:32.606 "percent": 32 00:23:32.606 } 00:23:32.606 }, 00:23:32.606 "base_bdevs_list": [ 00:23:32.606 { 00:23:32.606 "name": "spare", 00:23:32.606 "uuid": "ffbfa40c-06cd-50b8-a5e7-8f6fc56fdee4", 00:23:32.606 "is_configured": true, 00:23:32.606 "data_offset": 2048, 00:23:32.606 "data_size": 63488 00:23:32.606 }, 00:23:32.606 { 00:23:32.606 "name": "BaseBdev2", 00:23:32.606 "uuid": "6247e9b3-6b62-5b84-beb6-b2486fd4b42f", 00:23:32.606 "is_configured": true, 00:23:32.606 "data_offset": 2048, 00:23:32.606 "data_size": 63488 00:23:32.606 } 00:23:32.606 ] 00:23:32.606 }' 00:23:32.606 23:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:32.606 23:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:32.606 23:05:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:32.864 23:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:32.864 23:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:33.810 23:05:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:33.810 23:05:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:33.810 23:05:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:33.810 23:05:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:33.810 23:05:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:33.810 23:05:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:33.810 23:05:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.810 23:05:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:33.810 23:05:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.810 23:05:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:33.810 23:05:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.810 23:05:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:33.810 "name": "raid_bdev1", 00:23:33.810 "uuid": "ab382179-d81f-48c8-9711-361accf51c28", 00:23:33.810 "strip_size_kb": 0, 00:23:33.810 "state": "online", 00:23:33.810 "raid_level": "raid1", 00:23:33.810 "superblock": true, 00:23:33.810 "num_base_bdevs": 2, 00:23:33.810 "num_base_bdevs_discovered": 2, 00:23:33.810 "num_base_bdevs_operational": 2, 00:23:33.810 "process": { 
00:23:33.810 "type": "rebuild", 00:23:33.810 "target": "spare", 00:23:33.810 "progress": { 00:23:33.810 "blocks": 43008, 00:23:33.810 "percent": 67 00:23:33.810 } 00:23:33.810 }, 00:23:33.810 "base_bdevs_list": [ 00:23:33.810 { 00:23:33.810 "name": "spare", 00:23:33.810 "uuid": "ffbfa40c-06cd-50b8-a5e7-8f6fc56fdee4", 00:23:33.810 "is_configured": true, 00:23:33.810 "data_offset": 2048, 00:23:33.810 "data_size": 63488 00:23:33.810 }, 00:23:33.810 { 00:23:33.810 "name": "BaseBdev2", 00:23:33.810 "uuid": "6247e9b3-6b62-5b84-beb6-b2486fd4b42f", 00:23:33.810 "is_configured": true, 00:23:33.810 "data_offset": 2048, 00:23:33.810 "data_size": 63488 00:23:33.810 } 00:23:33.810 ] 00:23:33.810 }' 00:23:33.810 23:05:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:33.810 23:05:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:33.810 23:05:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:33.810 23:05:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:33.810 23:05:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:34.740 [2024-12-09 23:05:09.912940] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:34.740 [2024-12-09 23:05:09.913015] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:34.740 [2024-12-09 23:05:09.913119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:34.740 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:34.740 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:34.740 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:34.740 
23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:34.740 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:34.740 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:34.740 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:34.740 23:05:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.740 23:05:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:34.740 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.740 23:05:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.998 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:34.999 "name": "raid_bdev1", 00:23:34.999 "uuid": "ab382179-d81f-48c8-9711-361accf51c28", 00:23:34.999 "strip_size_kb": 0, 00:23:34.999 "state": "online", 00:23:34.999 "raid_level": "raid1", 00:23:34.999 "superblock": true, 00:23:34.999 "num_base_bdevs": 2, 00:23:34.999 "num_base_bdevs_discovered": 2, 00:23:34.999 "num_base_bdevs_operational": 2, 00:23:34.999 "base_bdevs_list": [ 00:23:34.999 { 00:23:34.999 "name": "spare", 00:23:34.999 "uuid": "ffbfa40c-06cd-50b8-a5e7-8f6fc56fdee4", 00:23:34.999 "is_configured": true, 00:23:34.999 "data_offset": 2048, 00:23:34.999 "data_size": 63488 00:23:34.999 }, 00:23:34.999 { 00:23:34.999 "name": "BaseBdev2", 00:23:34.999 "uuid": "6247e9b3-6b62-5b84-beb6-b2486fd4b42f", 00:23:34.999 "is_configured": true, 00:23:34.999 "data_offset": 2048, 00:23:34.999 "data_size": 63488 00:23:34.999 } 00:23:34.999 ] 00:23:34.999 }' 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:34.999 "name": "raid_bdev1", 00:23:34.999 "uuid": "ab382179-d81f-48c8-9711-361accf51c28", 00:23:34.999 "strip_size_kb": 0, 00:23:34.999 "state": "online", 00:23:34.999 "raid_level": "raid1", 00:23:34.999 "superblock": true, 00:23:34.999 "num_base_bdevs": 2, 00:23:34.999 "num_base_bdevs_discovered": 2, 00:23:34.999 "num_base_bdevs_operational": 2, 00:23:34.999 "base_bdevs_list": [ 00:23:34.999 { 00:23:34.999 
"name": "spare", 00:23:34.999 "uuid": "ffbfa40c-06cd-50b8-a5e7-8f6fc56fdee4", 00:23:34.999 "is_configured": true, 00:23:34.999 "data_offset": 2048, 00:23:34.999 "data_size": 63488 00:23:34.999 }, 00:23:34.999 { 00:23:34.999 "name": "BaseBdev2", 00:23:34.999 "uuid": "6247e9b3-6b62-5b84-beb6-b2486fd4b42f", 00:23:34.999 "is_configured": true, 00:23:34.999 "data_offset": 2048, 00:23:34.999 "data_size": 63488 00:23:34.999 } 00:23:34.999 ] 00:23:34.999 }' 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:34.999 "name": "raid_bdev1", 00:23:34.999 "uuid": "ab382179-d81f-48c8-9711-361accf51c28", 00:23:34.999 "strip_size_kb": 0, 00:23:34.999 "state": "online", 00:23:34.999 "raid_level": "raid1", 00:23:34.999 "superblock": true, 00:23:34.999 "num_base_bdevs": 2, 00:23:34.999 "num_base_bdevs_discovered": 2, 00:23:34.999 "num_base_bdevs_operational": 2, 00:23:34.999 "base_bdevs_list": [ 00:23:34.999 { 00:23:34.999 "name": "spare", 00:23:34.999 "uuid": "ffbfa40c-06cd-50b8-a5e7-8f6fc56fdee4", 00:23:34.999 "is_configured": true, 00:23:34.999 "data_offset": 2048, 00:23:34.999 "data_size": 63488 00:23:34.999 }, 00:23:34.999 { 00:23:34.999 "name": "BaseBdev2", 00:23:34.999 "uuid": "6247e9b3-6b62-5b84-beb6-b2486fd4b42f", 00:23:34.999 "is_configured": true, 00:23:34.999 "data_offset": 2048, 00:23:34.999 "data_size": 63488 00:23:34.999 } 00:23:34.999 ] 00:23:34.999 }' 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:34.999 23:05:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:35.256 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:35.256 23:05:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.256 23:05:10 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:23:35.256 [2024-12-09 23:05:10.547481] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:35.256 [2024-12-09 23:05:10.547515] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:35.256 [2024-12-09 23:05:10.547579] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:35.256 [2024-12-09 23:05:10.547637] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:35.256 [2024-12-09 23:05:10.547646] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:35.256 23:05:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.256 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:23:35.256 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:35.256 23:05:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.256 23:05:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:35.256 23:05:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.256 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:23:35.256 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:23:35.256 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:23:35.256 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:35.257 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:35.257 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:23:35.257 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:35.257 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:35.257 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:35.257 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:23:35.257 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:35.257 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:35.257 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:35.513 /dev/nbd0 00:23:35.513 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:35.513 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:35.513 23:05:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:35.513 23:05:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:23:35.513 23:05:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:35.513 23:05:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:35.513 23:05:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:35.513 23:05:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:23:35.513 23:05:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:35.513 23:05:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:35.513 23:05:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:35.513 1+0 records in 00:23:35.513 1+0 records out 00:23:35.513 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251425 s, 16.3 MB/s 00:23:35.513 23:05:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:35.513 23:05:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:23:35.513 23:05:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:35.513 23:05:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:35.513 23:05:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:23:35.513 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:35.513 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:35.513 23:05:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:23:35.771 /dev/nbd1 00:23:35.771 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:35.771 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:35.771 23:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:23:35.771 23:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:23:35.771 23:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:35.771 23:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:35.771 23:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:23:35.771 23:05:11 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:23:35.771 23:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:35.771 23:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:35.771 23:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:35.771 1+0 records in 00:23:35.771 1+0 records out 00:23:35.771 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316415 s, 12.9 MB/s 00:23:35.771 23:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:35.771 23:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:23:35.771 23:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:35.771 23:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:35.771 23:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:23:35.771 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:35.771 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:35.771 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:36.047 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:23:36.047 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:36.047 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:36.047 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:36.047 
23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:23:36.047 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:36.047 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:36.047 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:36.047 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:36.047 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:36.047 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:36.047 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:36.047 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:36.047 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:36.047 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:36.047 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:36.047 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:23:36.316 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:36.316 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:36.316 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:36.316 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:36.316 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:36.316 23:05:11 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:36.316 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:36.316 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:36.316 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:23:36.316 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:23:36.316 23:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.316 23:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:36.316 23:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.316 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:36.316 23:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.316 23:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:36.316 [2024-12-09 23:05:11.562716] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:36.316 [2024-12-09 23:05:11.562769] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:36.316 [2024-12-09 23:05:11.562790] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:36.316 [2024-12-09 23:05:11.562798] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:36.316 [2024-12-09 23:05:11.564668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:36.316 [2024-12-09 23:05:11.564700] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:36.316 [2024-12-09 23:05:11.564782] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:36.316 [2024-12-09 
23:05:11.564818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:36.316 [2024-12-09 23:05:11.564926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:36.316 spare 00:23:36.316 23:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.316 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:23:36.316 23:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.316 23:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:36.316 [2024-12-09 23:05:11.665012] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:23:36.316 [2024-12-09 23:05:11.665056] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:36.316 [2024-12-09 23:05:11.665337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:23:36.316 [2024-12-09 23:05:11.665492] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:23:36.316 [2024-12-09 23:05:11.665510] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:23:36.316 [2024-12-09 23:05:11.665654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:36.316 23:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.316 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:36.316 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:36.316 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:36.316 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:23:36.316 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:36.316 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:36.316 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:36.316 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:36.316 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:36.316 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:36.316 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:36.316 23:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.316 23:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:36.316 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.574 23:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.574 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:36.574 "name": "raid_bdev1", 00:23:36.574 "uuid": "ab382179-d81f-48c8-9711-361accf51c28", 00:23:36.574 "strip_size_kb": 0, 00:23:36.574 "state": "online", 00:23:36.574 "raid_level": "raid1", 00:23:36.574 "superblock": true, 00:23:36.574 "num_base_bdevs": 2, 00:23:36.574 "num_base_bdevs_discovered": 2, 00:23:36.574 "num_base_bdevs_operational": 2, 00:23:36.574 "base_bdevs_list": [ 00:23:36.574 { 00:23:36.574 "name": "spare", 00:23:36.574 "uuid": "ffbfa40c-06cd-50b8-a5e7-8f6fc56fdee4", 00:23:36.574 "is_configured": true, 00:23:36.574 "data_offset": 2048, 00:23:36.574 "data_size": 63488 00:23:36.574 }, 00:23:36.574 { 00:23:36.574 "name": "BaseBdev2", 00:23:36.574 "uuid": 
"6247e9b3-6b62-5b84-beb6-b2486fd4b42f", 00:23:36.574 "is_configured": true, 00:23:36.574 "data_offset": 2048, 00:23:36.574 "data_size": 63488 00:23:36.574 } 00:23:36.574 ] 00:23:36.574 }' 00:23:36.574 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:36.574 23:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:36.833 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:36.833 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:36.833 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:36.833 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:36.833 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:36.833 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:36.833 23:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.833 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.833 23:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:36.833 23:05:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.833 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:36.833 "name": "raid_bdev1", 00:23:36.833 "uuid": "ab382179-d81f-48c8-9711-361accf51c28", 00:23:36.833 "strip_size_kb": 0, 00:23:36.833 "state": "online", 00:23:36.833 "raid_level": "raid1", 00:23:36.833 "superblock": true, 00:23:36.833 "num_base_bdevs": 2, 00:23:36.833 "num_base_bdevs_discovered": 2, 00:23:36.833 "num_base_bdevs_operational": 2, 00:23:36.833 "base_bdevs_list": [ 00:23:36.833 { 
00:23:36.833 "name": "spare", 00:23:36.833 "uuid": "ffbfa40c-06cd-50b8-a5e7-8f6fc56fdee4", 00:23:36.833 "is_configured": true, 00:23:36.833 "data_offset": 2048, 00:23:36.833 "data_size": 63488 00:23:36.833 }, 00:23:36.833 { 00:23:36.833 "name": "BaseBdev2", 00:23:36.833 "uuid": "6247e9b3-6b62-5b84-beb6-b2486fd4b42f", 00:23:36.833 "is_configured": true, 00:23:36.833 "data_offset": 2048, 00:23:36.833 "data_size": 63488 00:23:36.833 } 00:23:36.833 ] 00:23:36.833 }' 00:23:36.833 23:05:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:36.833 23:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:36.833 23:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:36.833 23:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:36.833 23:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:36.833 23:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:36.833 23:05:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.833 23:05:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:36.833 23:05:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.833 23:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:23:36.833 23:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:36.833 23:05:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.833 23:05:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:36.833 [2024-12-09 23:05:12.062865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:23:36.833 23:05:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.833 23:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:36.833 23:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:36.833 23:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:36.833 23:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:36.833 23:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:36.833 23:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:36.833 23:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:36.833 23:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:36.833 23:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:36.833 23:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:36.833 23:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.834 23:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:36.834 23:05:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.834 23:05:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:36.834 23:05:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.834 23:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:36.834 "name": "raid_bdev1", 00:23:36.834 "uuid": "ab382179-d81f-48c8-9711-361accf51c28", 00:23:36.834 "strip_size_kb": 0, 00:23:36.834 
"state": "online", 00:23:36.834 "raid_level": "raid1", 00:23:36.834 "superblock": true, 00:23:36.834 "num_base_bdevs": 2, 00:23:36.834 "num_base_bdevs_discovered": 1, 00:23:36.834 "num_base_bdevs_operational": 1, 00:23:36.834 "base_bdevs_list": [ 00:23:36.834 { 00:23:36.834 "name": null, 00:23:36.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:36.834 "is_configured": false, 00:23:36.834 "data_offset": 0, 00:23:36.834 "data_size": 63488 00:23:36.834 }, 00:23:36.834 { 00:23:36.834 "name": "BaseBdev2", 00:23:36.834 "uuid": "6247e9b3-6b62-5b84-beb6-b2486fd4b42f", 00:23:36.834 "is_configured": true, 00:23:36.834 "data_offset": 2048, 00:23:36.834 "data_size": 63488 00:23:36.834 } 00:23:36.834 ] 00:23:36.834 }' 00:23:36.834 23:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:36.834 23:05:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:37.094 23:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:37.094 23:05:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.094 23:05:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:37.094 [2024-12-09 23:05:12.338983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:37.094 [2024-12-09 23:05:12.339212] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:37.094 [2024-12-09 23:05:12.339276] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:37.094 [2024-12-09 23:05:12.339317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:37.094 [2024-12-09 23:05:12.353032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:23:37.094 23:05:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.094 23:05:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:23:37.094 [2024-12-09 23:05:12.355252] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:38.028 23:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:38.028 23:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:38.028 23:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:38.028 23:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:38.028 23:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:38.028 23:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:38.028 23:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:38.028 23:05:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.028 23:05:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:38.028 23:05:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.286 23:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:38.286 "name": "raid_bdev1", 00:23:38.286 "uuid": "ab382179-d81f-48c8-9711-361accf51c28", 00:23:38.286 "strip_size_kb": 0, 00:23:38.286 "state": "online", 00:23:38.286 "raid_level": "raid1", 
00:23:38.286 "superblock": true, 00:23:38.286 "num_base_bdevs": 2, 00:23:38.286 "num_base_bdevs_discovered": 2, 00:23:38.286 "num_base_bdevs_operational": 2, 00:23:38.286 "process": { 00:23:38.286 "type": "rebuild", 00:23:38.286 "target": "spare", 00:23:38.286 "progress": { 00:23:38.286 "blocks": 20480, 00:23:38.286 "percent": 32 00:23:38.286 } 00:23:38.286 }, 00:23:38.286 "base_bdevs_list": [ 00:23:38.286 { 00:23:38.287 "name": "spare", 00:23:38.287 "uuid": "ffbfa40c-06cd-50b8-a5e7-8f6fc56fdee4", 00:23:38.287 "is_configured": true, 00:23:38.287 "data_offset": 2048, 00:23:38.287 "data_size": 63488 00:23:38.287 }, 00:23:38.287 { 00:23:38.287 "name": "BaseBdev2", 00:23:38.287 "uuid": "6247e9b3-6b62-5b84-beb6-b2486fd4b42f", 00:23:38.287 "is_configured": true, 00:23:38.287 "data_offset": 2048, 00:23:38.287 "data_size": 63488 00:23:38.287 } 00:23:38.287 ] 00:23:38.287 }' 00:23:38.287 23:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:38.287 23:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:38.287 23:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:38.287 23:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:38.287 23:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:23:38.287 23:05:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.287 23:05:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:38.287 [2024-12-09 23:05:13.452465] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:38.287 [2024-12-09 23:05:13.460394] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:38.287 [2024-12-09 23:05:13.460453] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:23:38.287 [2024-12-09 23:05:13.460465] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:38.287 [2024-12-09 23:05:13.460474] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:38.287 23:05:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.287 23:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:38.287 23:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:38.287 23:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:38.287 23:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:38.287 23:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:38.287 23:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:38.287 23:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:38.287 23:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:38.287 23:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:38.287 23:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:38.287 23:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:38.287 23:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:38.287 23:05:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.287 23:05:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:38.287 23:05:13 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.287 23:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:38.287 "name": "raid_bdev1", 00:23:38.287 "uuid": "ab382179-d81f-48c8-9711-361accf51c28", 00:23:38.287 "strip_size_kb": 0, 00:23:38.287 "state": "online", 00:23:38.287 "raid_level": "raid1", 00:23:38.287 "superblock": true, 00:23:38.287 "num_base_bdevs": 2, 00:23:38.287 "num_base_bdevs_discovered": 1, 00:23:38.287 "num_base_bdevs_operational": 1, 00:23:38.287 "base_bdevs_list": [ 00:23:38.287 { 00:23:38.287 "name": null, 00:23:38.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.287 "is_configured": false, 00:23:38.287 "data_offset": 0, 00:23:38.287 "data_size": 63488 00:23:38.287 }, 00:23:38.287 { 00:23:38.287 "name": "BaseBdev2", 00:23:38.287 "uuid": "6247e9b3-6b62-5b84-beb6-b2486fd4b42f", 00:23:38.287 "is_configured": true, 00:23:38.287 "data_offset": 2048, 00:23:38.287 "data_size": 63488 00:23:38.287 } 00:23:38.287 ] 00:23:38.287 }' 00:23:38.287 23:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:38.287 23:05:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:38.548 23:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:38.548 23:05:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.548 23:05:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:38.548 [2024-12-09 23:05:13.788837] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:38.548 [2024-12-09 23:05:13.788897] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:38.548 [2024-12-09 23:05:13.788915] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:23:38.548 [2024-12-09 23:05:13.788925] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:38.548 [2024-12-09 23:05:13.789312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:38.548 [2024-12-09 23:05:13.789335] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:38.548 [2024-12-09 23:05:13.789413] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:38.548 [2024-12-09 23:05:13.789424] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:38.548 [2024-12-09 23:05:13.789434] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:23:38.548 [2024-12-09 23:05:13.789454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:38.548 [2024-12-09 23:05:13.798279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:23:38.548 spare 00:23:38.548 23:05:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.548 23:05:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:23:38.548 [2024-12-09 23:05:13.799856] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:39.482 23:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:39.482 23:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:39.482 23:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:39.482 23:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:39.482 23:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:39.482 23:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:23:39.482 23:05:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.482 23:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.482 23:05:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:39.482 23:05:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.482 23:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:39.482 "name": "raid_bdev1", 00:23:39.482 "uuid": "ab382179-d81f-48c8-9711-361accf51c28", 00:23:39.482 "strip_size_kb": 0, 00:23:39.482 "state": "online", 00:23:39.482 "raid_level": "raid1", 00:23:39.482 "superblock": true, 00:23:39.482 "num_base_bdevs": 2, 00:23:39.482 "num_base_bdevs_discovered": 2, 00:23:39.482 "num_base_bdevs_operational": 2, 00:23:39.482 "process": { 00:23:39.482 "type": "rebuild", 00:23:39.482 "target": "spare", 00:23:39.482 "progress": { 00:23:39.482 "blocks": 20480, 00:23:39.482 "percent": 32 00:23:39.482 } 00:23:39.482 }, 00:23:39.482 "base_bdevs_list": [ 00:23:39.482 { 00:23:39.482 "name": "spare", 00:23:39.482 "uuid": "ffbfa40c-06cd-50b8-a5e7-8f6fc56fdee4", 00:23:39.482 "is_configured": true, 00:23:39.482 "data_offset": 2048, 00:23:39.482 "data_size": 63488 00:23:39.482 }, 00:23:39.482 { 00:23:39.482 "name": "BaseBdev2", 00:23:39.482 "uuid": "6247e9b3-6b62-5b84-beb6-b2486fd4b42f", 00:23:39.482 "is_configured": true, 00:23:39.482 "data_offset": 2048, 00:23:39.482 "data_size": 63488 00:23:39.482 } 00:23:39.482 ] 00:23:39.482 }' 00:23:39.740 23:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:39.740 23:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:39.740 23:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:39.740 
23:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:39.740 23:05:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:23:39.740 23:05:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.740 23:05:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:39.740 [2024-12-09 23:05:14.914127] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:39.740 [2024-12-09 23:05:15.005215] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:39.740 [2024-12-09 23:05:15.005281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:39.740 [2024-12-09 23:05:15.005295] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:39.740 [2024-12-09 23:05:15.005302] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:39.740 23:05:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.740 23:05:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:39.740 23:05:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:39.740 23:05:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:39.740 23:05:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:39.740 23:05:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:39.740 23:05:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:39.740 23:05:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:39.740 23:05:15 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:39.740 23:05:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:39.740 23:05:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:39.740 23:05:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:39.740 23:05:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.740 23:05:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.740 23:05:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:39.740 23:05:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.740 23:05:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:39.740 "name": "raid_bdev1", 00:23:39.740 "uuid": "ab382179-d81f-48c8-9711-361accf51c28", 00:23:39.740 "strip_size_kb": 0, 00:23:39.740 "state": "online", 00:23:39.740 "raid_level": "raid1", 00:23:39.740 "superblock": true, 00:23:39.740 "num_base_bdevs": 2, 00:23:39.740 "num_base_bdevs_discovered": 1, 00:23:39.740 "num_base_bdevs_operational": 1, 00:23:39.740 "base_bdevs_list": [ 00:23:39.740 { 00:23:39.740 "name": null, 00:23:39.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:39.740 "is_configured": false, 00:23:39.740 "data_offset": 0, 00:23:39.740 "data_size": 63488 00:23:39.740 }, 00:23:39.740 { 00:23:39.740 "name": "BaseBdev2", 00:23:39.740 "uuid": "6247e9b3-6b62-5b84-beb6-b2486fd4b42f", 00:23:39.740 "is_configured": true, 00:23:39.740 "data_offset": 2048, 00:23:39.740 "data_size": 63488 00:23:39.740 } 00:23:39.740 ] 00:23:39.740 }' 00:23:39.740 23:05:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:39.740 23:05:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:39.997 23:05:15 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:39.997 23:05:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:39.997 23:05:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:39.997 23:05:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:39.997 23:05:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:39.997 23:05:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:39.997 23:05:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.998 23:05:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.998 23:05:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:39.998 23:05:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.255 23:05:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:40.255 "name": "raid_bdev1", 00:23:40.255 "uuid": "ab382179-d81f-48c8-9711-361accf51c28", 00:23:40.255 "strip_size_kb": 0, 00:23:40.255 "state": "online", 00:23:40.255 "raid_level": "raid1", 00:23:40.255 "superblock": true, 00:23:40.256 "num_base_bdevs": 2, 00:23:40.256 "num_base_bdevs_discovered": 1, 00:23:40.256 "num_base_bdevs_operational": 1, 00:23:40.256 "base_bdevs_list": [ 00:23:40.256 { 00:23:40.256 "name": null, 00:23:40.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.256 "is_configured": false, 00:23:40.256 "data_offset": 0, 00:23:40.256 "data_size": 63488 00:23:40.256 }, 00:23:40.256 { 00:23:40.256 "name": "BaseBdev2", 00:23:40.256 "uuid": "6247e9b3-6b62-5b84-beb6-b2486fd4b42f", 00:23:40.256 "is_configured": true, 00:23:40.256 "data_offset": 2048, 00:23:40.256 "data_size": 
63488 00:23:40.256 } 00:23:40.256 ] 00:23:40.256 }' 00:23:40.256 23:05:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:40.256 23:05:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:40.256 23:05:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:40.256 23:05:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:40.256 23:05:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:23:40.256 23:05:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.256 23:05:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:40.256 23:05:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.256 23:05:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:40.256 23:05:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.256 23:05:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:40.256 [2024-12-09 23:05:15.443633] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:40.256 [2024-12-09 23:05:15.443680] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:40.256 [2024-12-09 23:05:15.443701] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:40.256 [2024-12-09 23:05:15.443708] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:40.256 [2024-12-09 23:05:15.444047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:40.256 [2024-12-09 23:05:15.444065] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:23:40.256 [2024-12-09 23:05:15.444138] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:40.256 [2024-12-09 23:05:15.444149] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:40.256 [2024-12-09 23:05:15.444158] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:40.256 [2024-12-09 23:05:15.444166] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:23:40.256 BaseBdev1 00:23:40.256 23:05:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.256 23:05:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:23:41.187 23:05:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:41.187 23:05:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:41.187 23:05:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:41.187 23:05:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:41.187 23:05:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:41.187 23:05:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:41.187 23:05:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:41.187 23:05:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:41.187 23:05:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:41.187 23:05:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:41.187 23:05:16 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.187 23:05:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:41.187 23:05:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.187 23:05:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:41.187 23:05:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.187 23:05:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:41.187 "name": "raid_bdev1", 00:23:41.187 "uuid": "ab382179-d81f-48c8-9711-361accf51c28", 00:23:41.187 "strip_size_kb": 0, 00:23:41.187 "state": "online", 00:23:41.187 "raid_level": "raid1", 00:23:41.187 "superblock": true, 00:23:41.187 "num_base_bdevs": 2, 00:23:41.187 "num_base_bdevs_discovered": 1, 00:23:41.187 "num_base_bdevs_operational": 1, 00:23:41.187 "base_bdevs_list": [ 00:23:41.187 { 00:23:41.187 "name": null, 00:23:41.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:41.187 "is_configured": false, 00:23:41.187 "data_offset": 0, 00:23:41.187 "data_size": 63488 00:23:41.187 }, 00:23:41.187 { 00:23:41.187 "name": "BaseBdev2", 00:23:41.187 "uuid": "6247e9b3-6b62-5b84-beb6-b2486fd4b42f", 00:23:41.187 "is_configured": true, 00:23:41.187 "data_offset": 2048, 00:23:41.187 "data_size": 63488 00:23:41.187 } 00:23:41.187 ] 00:23:41.187 }' 00:23:41.187 23:05:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:41.187 23:05:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:41.444 23:05:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:41.444 23:05:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:41.444 23:05:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:23:41.444 23:05:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:41.444 23:05:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:41.444 23:05:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:41.444 23:05:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.444 23:05:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.444 23:05:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:41.444 23:05:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.444 23:05:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:41.444 "name": "raid_bdev1", 00:23:41.444 "uuid": "ab382179-d81f-48c8-9711-361accf51c28", 00:23:41.444 "strip_size_kb": 0, 00:23:41.444 "state": "online", 00:23:41.444 "raid_level": "raid1", 00:23:41.444 "superblock": true, 00:23:41.444 "num_base_bdevs": 2, 00:23:41.444 "num_base_bdevs_discovered": 1, 00:23:41.444 "num_base_bdevs_operational": 1, 00:23:41.444 "base_bdevs_list": [ 00:23:41.444 { 00:23:41.444 "name": null, 00:23:41.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:41.444 "is_configured": false, 00:23:41.444 "data_offset": 0, 00:23:41.444 "data_size": 63488 00:23:41.444 }, 00:23:41.444 { 00:23:41.444 "name": "BaseBdev2", 00:23:41.444 "uuid": "6247e9b3-6b62-5b84-beb6-b2486fd4b42f", 00:23:41.444 "is_configured": true, 00:23:41.444 "data_offset": 2048, 00:23:41.444 "data_size": 63488 00:23:41.444 } 00:23:41.444 ] 00:23:41.444 }' 00:23:41.444 23:05:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:41.702 23:05:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:41.702 23:05:16 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:41.702 23:05:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:41.702 23:05:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:41.702 23:05:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:23:41.702 23:05:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:41.703 23:05:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:41.703 23:05:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:41.703 23:05:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:41.703 23:05:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:41.703 23:05:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:41.703 23:05:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.703 23:05:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:41.703 [2024-12-09 23:05:16.871938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:41.703 [2024-12-09 23:05:16.872065] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:41.703 [2024-12-09 23:05:16.872079] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:41.703 request: 00:23:41.703 { 00:23:41.703 "base_bdev": "BaseBdev1", 00:23:41.703 "raid_bdev": "raid_bdev1", 00:23:41.703 "method": 
"bdev_raid_add_base_bdev", 00:23:41.703 "req_id": 1 00:23:41.703 } 00:23:41.703 Got JSON-RPC error response 00:23:41.703 response: 00:23:41.703 { 00:23:41.703 "code": -22, 00:23:41.703 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:41.703 } 00:23:41.703 23:05:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:41.703 23:05:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:23:41.703 23:05:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:41.703 23:05:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:41.703 23:05:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:41.703 23:05:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:23:42.634 23:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:42.634 23:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:42.634 23:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:42.634 23:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:42.634 23:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:42.634 23:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:42.634 23:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:42.634 23:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:42.634 23:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:42.634 23:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:42.634 23:05:17 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:42.634 23:05:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.634 23:05:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:42.634 23:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.634 23:05:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.634 23:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:42.634 "name": "raid_bdev1", 00:23:42.634 "uuid": "ab382179-d81f-48c8-9711-361accf51c28", 00:23:42.634 "strip_size_kb": 0, 00:23:42.634 "state": "online", 00:23:42.634 "raid_level": "raid1", 00:23:42.634 "superblock": true, 00:23:42.634 "num_base_bdevs": 2, 00:23:42.634 "num_base_bdevs_discovered": 1, 00:23:42.634 "num_base_bdevs_operational": 1, 00:23:42.634 "base_bdevs_list": [ 00:23:42.634 { 00:23:42.634 "name": null, 00:23:42.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.634 "is_configured": false, 00:23:42.634 "data_offset": 0, 00:23:42.634 "data_size": 63488 00:23:42.634 }, 00:23:42.634 { 00:23:42.634 "name": "BaseBdev2", 00:23:42.634 "uuid": "6247e9b3-6b62-5b84-beb6-b2486fd4b42f", 00:23:42.634 "is_configured": true, 00:23:42.634 "data_offset": 2048, 00:23:42.634 "data_size": 63488 00:23:42.634 } 00:23:42.634 ] 00:23:42.634 }' 00:23:42.634 23:05:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:42.634 23:05:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:42.892 23:05:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:42.892 23:05:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:42.892 23:05:18 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:42.892 23:05:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:42.892 23:05:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:42.892 23:05:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:42.892 23:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.892 23:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:42.892 23:05:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.149 23:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.149 23:05:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:43.149 "name": "raid_bdev1", 00:23:43.149 "uuid": "ab382179-d81f-48c8-9711-361accf51c28", 00:23:43.149 "strip_size_kb": 0, 00:23:43.149 "state": "online", 00:23:43.149 "raid_level": "raid1", 00:23:43.149 "superblock": true, 00:23:43.149 "num_base_bdevs": 2, 00:23:43.149 "num_base_bdevs_discovered": 1, 00:23:43.149 "num_base_bdevs_operational": 1, 00:23:43.149 "base_bdevs_list": [ 00:23:43.149 { 00:23:43.149 "name": null, 00:23:43.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:43.149 "is_configured": false, 00:23:43.149 "data_offset": 0, 00:23:43.149 "data_size": 63488 00:23:43.149 }, 00:23:43.149 { 00:23:43.149 "name": "BaseBdev2", 00:23:43.149 "uuid": "6247e9b3-6b62-5b84-beb6-b2486fd4b42f", 00:23:43.149 "is_configured": true, 00:23:43.149 "data_offset": 2048, 00:23:43.149 "data_size": 63488 00:23:43.149 } 00:23:43.149 ] 00:23:43.149 }' 00:23:43.149 23:05:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:43.149 23:05:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:23:43.149 23:05:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:43.149 23:05:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:43.149 23:05:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 73736 00:23:43.149 23:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73736 ']' 00:23:43.149 23:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 73736 00:23:43.149 23:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:23:43.149 23:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:43.149 23:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73736 00:23:43.149 killing process with pid 73736 00:23:43.149 Received shutdown signal, test time was about 60.000000 seconds 00:23:43.149 00:23:43.149 Latency(us) 00:23:43.149 [2024-12-09T23:05:18.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.149 [2024-12-09T23:05:18.512Z] =================================================================================================================== 00:23:43.149 [2024-12-09T23:05:18.512Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:43.149 23:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:43.149 23:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:43.149 23:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73736' 00:23:43.149 23:05:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 73736 00:23:43.149 [2024-12-09 23:05:18.365000] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:43.149 23:05:18 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 73736 00:23:43.149 [2024-12-09 23:05:18.365091] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:43.149 [2024-12-09 23:05:18.365140] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:43.149 [2024-12-09 23:05:18.365150] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:23:43.149 [2024-12-09 23:05:18.509018] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:43.715 ************************************ 00:23:43.715 END TEST raid_rebuild_test_sb 00:23:43.715 ************************************ 00:23:43.715 23:05:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:23:43.715 00:23:43.715 real 0m20.428s 00:23:43.715 user 0m23.913s 00:23:43.715 sys 0m2.937s 00:23:43.715 23:05:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:43.715 23:05:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:43.973 23:05:19 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:23:43.973 23:05:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:23:43.973 23:05:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:43.973 23:05:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:43.973 ************************************ 00:23:43.973 START TEST raid_rebuild_test_io 00:23:43.973 ************************************ 00:23:43.973 23:05:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:23:43.973 23:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:23:43.973 23:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:23:43.973 23:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:23:43.973 23:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:23:43.973 23:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:23:43.973 23:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:23:43.973 23:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:43.973 23:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:23:43.973 23:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:43.973 23:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:43.973 23:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:23:43.973 23:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:43.973 23:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:43.973 23:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:43.973 23:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:23:43.973 23:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:23:43.973 23:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:23:43.973 23:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:23:43.973 23:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:23:43.973 23:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:23:43.973 23:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:23:43.973 
23:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:23:43.973 23:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:23:43.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:43.973 23:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=74437 00:23:43.973 23:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 74437 00:23:43.973 23:05:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 74437 ']' 00:23:43.973 23:05:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.973 23:05:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:43.973 23:05:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:43.973 23:05:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:43.973 23:05:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:43.973 23:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:43.973 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:43.973 Zero copy mechanism will not be used. 00:23:43.973 [2024-12-09 23:05:19.183704] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:23:43.973 [2024-12-09 23:05:19.183828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74437 ] 00:23:44.230 [2024-12-09 23:05:19.338755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.230 [2024-12-09 23:05:19.423271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.230 [2024-12-09 23:05:19.532863] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:44.230 [2024-12-09 23:05:19.532890] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:44.828 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:44.828 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:23:44.828 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:44.828 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:44.828 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.828 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:44.828 BaseBdev1_malloc 00:23:44.828 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.828 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:44.828 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.828 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:44.828 [2024-12-09 23:05:20.056477] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:23:44.828 [2024-12-09 23:05:20.056688] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:44.828 [2024-12-09 23:05:20.056712] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:44.828 [2024-12-09 23:05:20.056721] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:44.828 [2024-12-09 23:05:20.058514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:44.828 [2024-12-09 23:05:20.058544] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:44.828 BaseBdev1 00:23:44.828 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.828 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:44.828 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:44.828 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.828 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:44.828 BaseBdev2_malloc 00:23:44.828 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.828 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:44.828 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.828 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:44.828 [2024-12-09 23:05:20.088056] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:44.828 [2024-12-09 23:05:20.088214] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:44.829 [2024-12-09 23:05:20.088238] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:44.829 [2024-12-09 23:05:20.088246] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:44.829 [2024-12-09 23:05:20.090016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:44.829 [2024-12-09 23:05:20.090047] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:44.829 BaseBdev2 00:23:44.829 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.829 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:23:44.829 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.829 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:44.829 spare_malloc 00:23:44.829 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.829 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:44.829 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.829 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:44.829 spare_delay 00:23:44.829 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.829 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:44.829 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.829 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:44.829 [2024-12-09 23:05:20.141804] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:23:44.829 [2024-12-09 23:05:20.141857] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:44.829 [2024-12-09 23:05:20.141875] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:44.829 [2024-12-09 23:05:20.141884] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:44.829 [2024-12-09 23:05:20.143672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:44.829 [2024-12-09 23:05:20.143704] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:44.829 spare 00:23:44.829 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.829 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:23:44.829 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.829 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:44.829 [2024-12-09 23:05:20.149850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:44.829 [2024-12-09 23:05:20.151374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:44.829 [2024-12-09 23:05:20.151572] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:44.829 [2024-12-09 23:05:20.151589] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:23:44.829 [2024-12-09 23:05:20.151805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:44.829 [2024-12-09 23:05:20.151925] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:44.829 [2024-12-09 23:05:20.151933] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:23:44.829 [2024-12-09 23:05:20.152046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:44.829 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.829 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:44.829 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:44.829 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:44.829 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:44.829 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:44.829 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:44.829 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:44.829 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:44.829 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:44.829 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:44.829 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.829 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:44.829 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.829 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:44.829 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.093 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:45.093 
"name": "raid_bdev1", 00:23:45.093 "uuid": "7cc7690c-97fd-40a7-a8f3-c9d8002e250b", 00:23:45.093 "strip_size_kb": 0, 00:23:45.093 "state": "online", 00:23:45.093 "raid_level": "raid1", 00:23:45.093 "superblock": false, 00:23:45.093 "num_base_bdevs": 2, 00:23:45.093 "num_base_bdevs_discovered": 2, 00:23:45.093 "num_base_bdevs_operational": 2, 00:23:45.093 "base_bdevs_list": [ 00:23:45.093 { 00:23:45.093 "name": "BaseBdev1", 00:23:45.093 "uuid": "e09403f6-9b74-5e02-b8b2-751d418b83af", 00:23:45.093 "is_configured": true, 00:23:45.093 "data_offset": 0, 00:23:45.093 "data_size": 65536 00:23:45.093 }, 00:23:45.093 { 00:23:45.093 "name": "BaseBdev2", 00:23:45.093 "uuid": "fe178632-fd64-503c-9f91-882bf8f18ecd", 00:23:45.093 "is_configured": true, 00:23:45.093 "data_offset": 0, 00:23:45.093 "data_size": 65536 00:23:45.093 } 00:23:45.093 ] 00:23:45.093 }' 00:23:45.093 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:45.093 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:45.352 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:45.352 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:45.352 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.352 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:45.352 [2024-12-09 23:05:20.498161] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:45.352 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.352 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:23:45.352 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:45.352 23:05:20 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.352 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:45.352 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:45.352 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.352 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:23:45.352 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:23:45.352 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:23:45.352 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:23:45.352 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.352 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:45.352 [2024-12-09 23:05:20.557905] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:45.352 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.352 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:45.352 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:45.352 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:45.352 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:45.352 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:45.352 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:45.352 23:05:20 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:45.352 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:45.352 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:45.352 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:45.352 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.352 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:45.353 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.353 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:45.353 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.353 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:45.353 "name": "raid_bdev1", 00:23:45.353 "uuid": "7cc7690c-97fd-40a7-a8f3-c9d8002e250b", 00:23:45.353 "strip_size_kb": 0, 00:23:45.353 "state": "online", 00:23:45.353 "raid_level": "raid1", 00:23:45.353 "superblock": false, 00:23:45.353 "num_base_bdevs": 2, 00:23:45.353 "num_base_bdevs_discovered": 1, 00:23:45.353 "num_base_bdevs_operational": 1, 00:23:45.353 "base_bdevs_list": [ 00:23:45.353 { 00:23:45.353 "name": null, 00:23:45.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:45.353 "is_configured": false, 00:23:45.353 "data_offset": 0, 00:23:45.353 "data_size": 65536 00:23:45.353 }, 00:23:45.353 { 00:23:45.353 "name": "BaseBdev2", 00:23:45.353 "uuid": "fe178632-fd64-503c-9f91-882bf8f18ecd", 00:23:45.353 "is_configured": true, 00:23:45.353 "data_offset": 0, 00:23:45.353 "data_size": 65536 00:23:45.353 } 00:23:45.353 ] 00:23:45.353 }' 00:23:45.353 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:23:45.353 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:45.353 [2024-12-09 23:05:20.642383] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:23:45.353 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:45.353 Zero copy mechanism will not be used. 00:23:45.353 Running I/O for 60 seconds... 00:23:45.611 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:45.611 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.611 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:45.611 [2024-12-09 23:05:20.867673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:45.611 23:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.611 23:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:23:45.611 [2024-12-09 23:05:20.922663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:45.611 [2024-12-09 23:05:20.924384] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:45.868 [2024-12-09 23:05:21.046778] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:45.868 [2024-12-09 23:05:21.047173] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:46.125 [2024-12-09 23:05:21.272283] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:46.125 [2024-12-09 23:05:21.272502] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:46.383 [2024-12-09 23:05:21.529858] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:46.384 167.00 IOPS, 501.00 MiB/s [2024-12-09T23:05:21.747Z] [2024-12-09 23:05:21.742217] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:46.384 [2024-12-09 23:05:21.742440] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:46.642 23:05:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:46.642 23:05:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:46.642 23:05:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:46.642 23:05:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:46.642 23:05:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:46.642 23:05:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:46.642 23:05:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.642 23:05:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.642 23:05:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:46.642 23:05:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.642 23:05:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:46.642 "name": "raid_bdev1", 00:23:46.642 "uuid": "7cc7690c-97fd-40a7-a8f3-c9d8002e250b", 00:23:46.642 "strip_size_kb": 0, 00:23:46.642 "state": "online", 00:23:46.642 "raid_level": "raid1", 00:23:46.642 "superblock": false, 00:23:46.642 "num_base_bdevs": 2, 00:23:46.642 
"num_base_bdevs_discovered": 2, 00:23:46.642 "num_base_bdevs_operational": 2, 00:23:46.642 "process": { 00:23:46.642 "type": "rebuild", 00:23:46.642 "target": "spare", 00:23:46.642 "progress": { 00:23:46.642 "blocks": 12288, 00:23:46.642 "percent": 18 00:23:46.642 } 00:23:46.642 }, 00:23:46.642 "base_bdevs_list": [ 00:23:46.642 { 00:23:46.642 "name": "spare", 00:23:46.642 "uuid": "3534f720-34f4-54c8-aa99-7dd577bdac00", 00:23:46.642 "is_configured": true, 00:23:46.642 "data_offset": 0, 00:23:46.642 "data_size": 65536 00:23:46.642 }, 00:23:46.642 { 00:23:46.642 "name": "BaseBdev2", 00:23:46.642 "uuid": "fe178632-fd64-503c-9f91-882bf8f18ecd", 00:23:46.642 "is_configured": true, 00:23:46.642 "data_offset": 0, 00:23:46.642 "data_size": 65536 00:23:46.642 } 00:23:46.642 ] 00:23:46.642 }' 00:23:46.642 23:05:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:46.642 23:05:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:46.642 23:05:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:46.900 23:05:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:46.900 23:05:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:46.900 23:05:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.900 23:05:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:46.900 [2024-12-09 23:05:22.006473] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:46.900 [2024-12-09 23:05:22.149420] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:46.900 [2024-12-09 23:05:22.156706] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:46.900 [2024-12-09 23:05:22.156817] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:46.900 [2024-12-09 23:05:22.156831] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:46.900 [2024-12-09 23:05:22.178466] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:23:46.900 23:05:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.900 23:05:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:46.900 23:05:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:46.900 23:05:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:46.900 23:05:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:46.900 23:05:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:46.900 23:05:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:46.900 23:05:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:46.900 23:05:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:46.900 23:05:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:46.900 23:05:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:46.900 23:05:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:46.900 23:05:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.900 23:05:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.900 23:05:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:23:46.900 23:05:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.900 23:05:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:46.900 "name": "raid_bdev1", 00:23:46.900 "uuid": "7cc7690c-97fd-40a7-a8f3-c9d8002e250b", 00:23:46.900 "strip_size_kb": 0, 00:23:46.900 "state": "online", 00:23:46.900 "raid_level": "raid1", 00:23:46.900 "superblock": false, 00:23:46.900 "num_base_bdevs": 2, 00:23:46.900 "num_base_bdevs_discovered": 1, 00:23:46.900 "num_base_bdevs_operational": 1, 00:23:46.900 "base_bdevs_list": [ 00:23:46.900 { 00:23:46.900 "name": null, 00:23:46.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:46.900 "is_configured": false, 00:23:46.900 "data_offset": 0, 00:23:46.900 "data_size": 65536 00:23:46.900 }, 00:23:46.900 { 00:23:46.900 "name": "BaseBdev2", 00:23:46.900 "uuid": "fe178632-fd64-503c-9f91-882bf8f18ecd", 00:23:46.900 "is_configured": true, 00:23:46.900 "data_offset": 0, 00:23:46.900 "data_size": 65536 00:23:46.900 } 00:23:46.900 ] 00:23:46.900 }' 00:23:46.900 23:05:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:46.900 23:05:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:47.156 23:05:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:47.156 23:05:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:47.156 23:05:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:47.156 23:05:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:47.156 23:05:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:47.156 23:05:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:47.156 23:05:22 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.156 23:05:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:47.156 23:05:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.414 23:05:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.414 23:05:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:47.414 "name": "raid_bdev1", 00:23:47.414 "uuid": "7cc7690c-97fd-40a7-a8f3-c9d8002e250b", 00:23:47.414 "strip_size_kb": 0, 00:23:47.414 "state": "online", 00:23:47.414 "raid_level": "raid1", 00:23:47.414 "superblock": false, 00:23:47.414 "num_base_bdevs": 2, 00:23:47.414 "num_base_bdevs_discovered": 1, 00:23:47.414 "num_base_bdevs_operational": 1, 00:23:47.414 "base_bdevs_list": [ 00:23:47.414 { 00:23:47.414 "name": null, 00:23:47.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:47.414 "is_configured": false, 00:23:47.414 "data_offset": 0, 00:23:47.414 "data_size": 65536 00:23:47.414 }, 00:23:47.414 { 00:23:47.414 "name": "BaseBdev2", 00:23:47.414 "uuid": "fe178632-fd64-503c-9f91-882bf8f18ecd", 00:23:47.414 "is_configured": true, 00:23:47.414 "data_offset": 0, 00:23:47.414 "data_size": 65536 00:23:47.414 } 00:23:47.414 ] 00:23:47.414 }' 00:23:47.414 23:05:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:47.414 23:05:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:47.414 23:05:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:47.414 23:05:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:47.414 23:05:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:47.414 23:05:22 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.414 23:05:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:47.414 [2024-12-09 23:05:22.609857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:47.414 23:05:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.414 23:05:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:23:47.414 [2024-12-09 23:05:22.638707] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:23:47.414 [2024-12-09 23:05:22.640353] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:47.414 197.00 IOPS, 591.00 MiB/s [2024-12-09T23:05:22.777Z] [2024-12-09 23:05:22.752260] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:47.414 [2024-12-09 23:05:22.752826] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:47.726 [2024-12-09 23:05:22.966661] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:47.726 [2024-12-09 23:05:22.967006] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:48.292 [2024-12-09 23:05:23.408456] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:48.292 23:05:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:48.293 23:05:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:48.293 23:05:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:48.293 23:05:23 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:23:48.293 23:05:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:48.293 23:05:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:48.293 23:05:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:48.293 23:05:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.293 23:05:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:48.552 159.00 IOPS, 477.00 MiB/s [2024-12-09T23:05:23.915Z] 23:05:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.552 23:05:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:48.552 "name": "raid_bdev1", 00:23:48.552 "uuid": "7cc7690c-97fd-40a7-a8f3-c9d8002e250b", 00:23:48.552 "strip_size_kb": 0, 00:23:48.552 "state": "online", 00:23:48.552 "raid_level": "raid1", 00:23:48.552 "superblock": false, 00:23:48.552 "num_base_bdevs": 2, 00:23:48.552 "num_base_bdevs_discovered": 2, 00:23:48.552 "num_base_bdevs_operational": 2, 00:23:48.552 "process": { 00:23:48.552 "type": "rebuild", 00:23:48.552 "target": "spare", 00:23:48.552 "progress": { 00:23:48.552 "blocks": 12288, 00:23:48.552 "percent": 18 00:23:48.552 } 00:23:48.552 }, 00:23:48.552 "base_bdevs_list": [ 00:23:48.552 { 00:23:48.552 "name": "spare", 00:23:48.552 "uuid": "3534f720-34f4-54c8-aa99-7dd577bdac00", 00:23:48.552 "is_configured": true, 00:23:48.552 "data_offset": 0, 00:23:48.552 "data_size": 65536 00:23:48.552 }, 00:23:48.552 { 00:23:48.552 "name": "BaseBdev2", 00:23:48.552 "uuid": "fe178632-fd64-503c-9f91-882bf8f18ecd", 00:23:48.552 "is_configured": true, 00:23:48.552 "data_offset": 0, 00:23:48.552 "data_size": 65536 00:23:48.552 } 00:23:48.552 ] 00:23:48.552 }' 00:23:48.552 23:05:23 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:48.552 23:05:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:48.552 23:05:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:48.552 23:05:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:48.552 23:05:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:23:48.552 23:05:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:23:48.552 23:05:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:23:48.552 23:05:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:23:48.552 23:05:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=331 00:23:48.552 23:05:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:48.552 23:05:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:48.552 23:05:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:48.552 23:05:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:48.552 23:05:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:48.552 23:05:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:48.552 23:05:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:48.552 23:05:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:48.552 23:05:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.552 23:05:23 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:23:48.552 23:05:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.552 23:05:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:48.552 "name": "raid_bdev1", 00:23:48.552 "uuid": "7cc7690c-97fd-40a7-a8f3-c9d8002e250b", 00:23:48.552 "strip_size_kb": 0, 00:23:48.552 "state": "online", 00:23:48.552 "raid_level": "raid1", 00:23:48.552 "superblock": false, 00:23:48.552 "num_base_bdevs": 2, 00:23:48.552 "num_base_bdevs_discovered": 2, 00:23:48.552 "num_base_bdevs_operational": 2, 00:23:48.552 "process": { 00:23:48.552 "type": "rebuild", 00:23:48.552 "target": "spare", 00:23:48.552 "progress": { 00:23:48.552 "blocks": 14336, 00:23:48.552 "percent": 21 00:23:48.552 } 00:23:48.552 }, 00:23:48.552 "base_bdevs_list": [ 00:23:48.552 { 00:23:48.552 "name": "spare", 00:23:48.552 "uuid": "3534f720-34f4-54c8-aa99-7dd577bdac00", 00:23:48.552 "is_configured": true, 00:23:48.552 "data_offset": 0, 00:23:48.552 "data_size": 65536 00:23:48.552 }, 00:23:48.552 { 00:23:48.552 "name": "BaseBdev2", 00:23:48.552 "uuid": "fe178632-fd64-503c-9f91-882bf8f18ecd", 00:23:48.552 "is_configured": true, 00:23:48.552 "data_offset": 0, 00:23:48.552 "data_size": 65536 00:23:48.552 } 00:23:48.552 ] 00:23:48.552 }' 00:23:48.552 23:05:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:48.552 23:05:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:48.552 23:05:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:48.552 23:05:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:48.552 23:05:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:48.810 [2024-12-09 23:05:23.994902] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 
offset_begin: 18432 offset_end: 24576 00:23:49.377 [2024-12-09 23:05:24.638765] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:23:49.637 139.25 IOPS, 417.75 MiB/s [2024-12-09T23:05:25.000Z] [2024-12-09 23:05:24.755798] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:23:49.637 23:05:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:49.637 23:05:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:49.637 23:05:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:49.637 23:05:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:49.637 23:05:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:49.637 23:05:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:49.637 23:05:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:49.637 23:05:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.637 23:05:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:49.637 23:05:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:49.637 23:05:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.637 23:05:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:49.637 "name": "raid_bdev1", 00:23:49.637 "uuid": "7cc7690c-97fd-40a7-a8f3-c9d8002e250b", 00:23:49.637 "strip_size_kb": 0, 00:23:49.637 "state": "online", 00:23:49.637 "raid_level": "raid1", 00:23:49.637 "superblock": false, 00:23:49.637 
"num_base_bdevs": 2, 00:23:49.637 "num_base_bdevs_discovered": 2, 00:23:49.637 "num_base_bdevs_operational": 2, 00:23:49.637 "process": { 00:23:49.637 "type": "rebuild", 00:23:49.637 "target": "spare", 00:23:49.637 "progress": { 00:23:49.637 "blocks": 34816, 00:23:49.637 "percent": 53 00:23:49.637 } 00:23:49.637 }, 00:23:49.637 "base_bdevs_list": [ 00:23:49.637 { 00:23:49.637 "name": "spare", 00:23:49.637 "uuid": "3534f720-34f4-54c8-aa99-7dd577bdac00", 00:23:49.637 "is_configured": true, 00:23:49.637 "data_offset": 0, 00:23:49.637 "data_size": 65536 00:23:49.637 }, 00:23:49.637 { 00:23:49.637 "name": "BaseBdev2", 00:23:49.637 "uuid": "fe178632-fd64-503c-9f91-882bf8f18ecd", 00:23:49.637 "is_configured": true, 00:23:49.637 "data_offset": 0, 00:23:49.637 "data_size": 65536 00:23:49.637 } 00:23:49.637 ] 00:23:49.637 }' 00:23:49.637 23:05:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:49.637 23:05:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:49.637 23:05:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:49.637 23:05:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:49.637 23:05:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:50.207 [2024-12-09 23:05:25.274035] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:23:50.723 121.80 IOPS, 365.40 MiB/s [2024-12-09T23:05:26.086Z] 23:05:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:50.723 23:05:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:50.723 23:05:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:50.723 23:05:25 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:50.723 23:05:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:50.723 23:05:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:50.723 23:05:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:50.723 23:05:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.723 23:05:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:50.723 23:05:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:50.723 23:05:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.723 23:05:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:50.723 "name": "raid_bdev1", 00:23:50.723 "uuid": "7cc7690c-97fd-40a7-a8f3-c9d8002e250b", 00:23:50.723 "strip_size_kb": 0, 00:23:50.723 "state": "online", 00:23:50.723 "raid_level": "raid1", 00:23:50.723 "superblock": false, 00:23:50.723 "num_base_bdevs": 2, 00:23:50.723 "num_base_bdevs_discovered": 2, 00:23:50.723 "num_base_bdevs_operational": 2, 00:23:50.723 "process": { 00:23:50.723 "type": "rebuild", 00:23:50.723 "target": "spare", 00:23:50.723 "progress": { 00:23:50.723 "blocks": 55296, 00:23:50.723 "percent": 84 00:23:50.723 } 00:23:50.723 }, 00:23:50.723 "base_bdevs_list": [ 00:23:50.723 { 00:23:50.723 "name": "spare", 00:23:50.723 "uuid": "3534f720-34f4-54c8-aa99-7dd577bdac00", 00:23:50.723 "is_configured": true, 00:23:50.723 "data_offset": 0, 00:23:50.723 "data_size": 65536 00:23:50.723 }, 00:23:50.723 { 00:23:50.723 "name": "BaseBdev2", 00:23:50.723 "uuid": "fe178632-fd64-503c-9f91-882bf8f18ecd", 00:23:50.723 "is_configured": true, 00:23:50.723 "data_offset": 0, 00:23:50.723 "data_size": 65536 00:23:50.723 } 00:23:50.723 ] 00:23:50.723 }' 00:23:50.723 
23:05:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:50.724 [2024-12-09 23:05:26.000787] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:23:50.724 23:05:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:50.724 23:05:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:50.724 23:05:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:50.724 23:05:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:51.290 [2024-12-09 23:05:26.446952] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:51.290 [2024-12-09 23:05:26.552074] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:51.290 [2024-12-09 23:05:26.553757] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:51.806 107.00 IOPS, 321.00 MiB/s [2024-12-09T23:05:27.169Z] 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:51.806 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:51.806 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:51.806 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:51.806 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:51.806 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:51.806 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:51.806 23:05:27 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.806 23:05:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:51.806 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.806 23:05:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.806 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:51.806 "name": "raid_bdev1", 00:23:51.806 "uuid": "7cc7690c-97fd-40a7-a8f3-c9d8002e250b", 00:23:51.806 "strip_size_kb": 0, 00:23:51.806 "state": "online", 00:23:51.806 "raid_level": "raid1", 00:23:51.806 "superblock": false, 00:23:51.806 "num_base_bdevs": 2, 00:23:51.806 "num_base_bdevs_discovered": 2, 00:23:51.806 "num_base_bdevs_operational": 2, 00:23:51.806 "base_bdevs_list": [ 00:23:51.806 { 00:23:51.806 "name": "spare", 00:23:51.806 "uuid": "3534f720-34f4-54c8-aa99-7dd577bdac00", 00:23:51.806 "is_configured": true, 00:23:51.806 "data_offset": 0, 00:23:51.806 "data_size": 65536 00:23:51.806 }, 00:23:51.806 { 00:23:51.806 "name": "BaseBdev2", 00:23:51.806 "uuid": "fe178632-fd64-503c-9f91-882bf8f18ecd", 00:23:51.806 "is_configured": true, 00:23:51.806 "data_offset": 0, 00:23:51.806 "data_size": 65536 00:23:51.806 } 00:23:51.806 ] 00:23:51.806 }' 00:23:51.806 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:51.806 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:51.806 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:51.806 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:23:51.806 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:23:51.806 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process 
raid_bdev1 none none 00:23:51.806 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:51.806 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:51.806 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:51.806 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:52.065 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:52.065 23:05:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.065 23:05:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:52.065 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.065 23:05:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.065 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:52.065 "name": "raid_bdev1", 00:23:52.065 "uuid": "7cc7690c-97fd-40a7-a8f3-c9d8002e250b", 00:23:52.065 "strip_size_kb": 0, 00:23:52.065 "state": "online", 00:23:52.065 "raid_level": "raid1", 00:23:52.065 "superblock": false, 00:23:52.065 "num_base_bdevs": 2, 00:23:52.065 "num_base_bdevs_discovered": 2, 00:23:52.065 "num_base_bdevs_operational": 2, 00:23:52.065 "base_bdevs_list": [ 00:23:52.065 { 00:23:52.065 "name": "spare", 00:23:52.065 "uuid": "3534f720-34f4-54c8-aa99-7dd577bdac00", 00:23:52.066 "is_configured": true, 00:23:52.066 "data_offset": 0, 00:23:52.066 "data_size": 65536 00:23:52.066 }, 00:23:52.066 { 00:23:52.066 "name": "BaseBdev2", 00:23:52.066 "uuid": "fe178632-fd64-503c-9f91-882bf8f18ecd", 00:23:52.066 "is_configured": true, 00:23:52.066 "data_offset": 0, 00:23:52.066 "data_size": 65536 00:23:52.066 } 00:23:52.066 ] 00:23:52.066 }' 00:23:52.066 23:05:27 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:52.066 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:52.066 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:52.066 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:52.066 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:52.066 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:52.066 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:52.066 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:52.066 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:52.066 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:52.066 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:52.066 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:52.066 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:52.066 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:52.066 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:52.066 23:05:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.066 23:05:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:52.066 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.066 23:05:27 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.066 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:52.066 "name": "raid_bdev1", 00:23:52.066 "uuid": "7cc7690c-97fd-40a7-a8f3-c9d8002e250b", 00:23:52.066 "strip_size_kb": 0, 00:23:52.066 "state": "online", 00:23:52.066 "raid_level": "raid1", 00:23:52.066 "superblock": false, 00:23:52.066 "num_base_bdevs": 2, 00:23:52.066 "num_base_bdevs_discovered": 2, 00:23:52.066 "num_base_bdevs_operational": 2, 00:23:52.066 "base_bdevs_list": [ 00:23:52.066 { 00:23:52.066 "name": "spare", 00:23:52.066 "uuid": "3534f720-34f4-54c8-aa99-7dd577bdac00", 00:23:52.066 "is_configured": true, 00:23:52.066 "data_offset": 0, 00:23:52.066 "data_size": 65536 00:23:52.066 }, 00:23:52.066 { 00:23:52.066 "name": "BaseBdev2", 00:23:52.066 "uuid": "fe178632-fd64-503c-9f91-882bf8f18ecd", 00:23:52.066 "is_configured": true, 00:23:52.066 "data_offset": 0, 00:23:52.066 "data_size": 65536 00:23:52.066 } 00:23:52.066 ] 00:23:52.066 }' 00:23:52.066 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:52.066 23:05:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:52.324 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:52.324 23:05:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.324 23:05:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:52.324 [2024-12-09 23:05:27.542691] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:52.324 [2024-12-09 23:05:27.542715] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:52.324 00:23:52.324 Latency(us) 00:23:52.324 [2024-12-09T23:05:27.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.324 Job: raid_bdev1 
(Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:23:52.324 raid_bdev1 : 6.99 97.27 291.82 0.00 0.00 14736.28 269.39 108890.58 00:23:52.324 [2024-12-09T23:05:27.687Z] =================================================================================================================== 00:23:52.324 [2024-12-09T23:05:27.687Z] Total : 97.27 291.82 0.00 0.00 14736.28 269.39 108890.58 00:23:52.324 [2024-12-09 23:05:27.646667] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:52.324 [2024-12-09 23:05:27.646801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:52.324 [2024-12-09 23:05:27.646883] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:52.324 [2024-12-09 23:05:27.646945] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:52.324 { 00:23:52.324 "results": [ 00:23:52.324 { 00:23:52.324 "job": "raid_bdev1", 00:23:52.324 "core_mask": "0x1", 00:23:52.324 "workload": "randrw", 00:23:52.324 "percentage": 50, 00:23:52.324 "status": "finished", 00:23:52.324 "queue_depth": 2, 00:23:52.324 "io_size": 3145728, 00:23:52.324 "runtime": 6.990601, 00:23:52.324 "iops": 97.27346761744806, 00:23:52.324 "mibps": 291.82040285234416, 00:23:52.324 "io_failed": 0, 00:23:52.324 "io_timeout": 0, 00:23:52.324 "avg_latency_us": 14736.280904977377, 00:23:52.324 "min_latency_us": 269.39076923076925, 00:23:52.324 "max_latency_us": 108890.58461538462 00:23:52.324 } 00:23:52.324 ], 00:23:52.325 "core_count": 1 00:23:52.325 } 00:23:52.325 23:05:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.325 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:23:52.325 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:52.325 23:05:27 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.325 23:05:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:52.325 23:05:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.325 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:23:52.325 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:23:52.325 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:23:52.325 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:23:52.325 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:52.325 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:23:52.325 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:52.325 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:52.325 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:52.325 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:23:52.325 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:52.325 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:52.325 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:23:52.582 /dev/nbd0 00:23:52.839 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:52.839 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:52.839 23:05:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:23:52.839 23:05:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:23:52.839 23:05:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:52.839 23:05:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:52.839 23:05:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:52.839 23:05:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:23:52.839 23:05:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:52.839 23:05:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:52.839 23:05:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:52.839 1+0 records in 00:23:52.839 1+0 records out 00:23:52.839 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226045 s, 18.1 MB/s 00:23:52.840 23:05:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:52.840 23:05:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:23:52.840 23:05:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:52.840 23:05:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:52.840 23:05:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:23:52.840 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:52.840 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:52.840 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:23:52.840 23:05:27 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:23:52.840 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:23:52.840 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:52.840 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:23:52.840 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:52.840 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:23:52.840 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:52.840 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:23:52.840 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:52.840 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:52.840 23:05:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:23:52.840 /dev/nbd1 00:23:52.840 23:05:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:52.840 23:05:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:52.840 23:05:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:23:52.840 23:05:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:23:52.840 23:05:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:52.840 23:05:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:52.840 23:05:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:23:52.840 
23:05:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:23:52.840 23:05:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:52.840 23:05:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:52.840 23:05:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:53.098 1+0 records in 00:23:53.098 1+0 records out 00:23:53.098 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225167 s, 18.2 MB/s 00:23:53.098 23:05:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:53.098 23:05:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:23:53.098 23:05:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:53.098 23:05:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:53.098 23:05:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:23:53.098 23:05:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:53.098 23:05:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:53.098 23:05:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:53.098 23:05:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:23:53.098 23:05:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:53.098 23:05:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:23:53.098 23:05:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:53.098 23:05:28 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:23:53.098 23:05:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:53.098 23:05:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:23:53.356 23:05:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:53.356 23:05:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:53.356 23:05:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:53.356 23:05:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:53.356 23:05:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:53.356 23:05:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:53.356 23:05:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:23:53.356 23:05:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:23:53.356 23:05:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:23:53.356 23:05:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:53.356 23:05:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:53.356 23:05:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:53.356 23:05:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:23:53.356 23:05:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:53.356 23:05:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:53.615 23:05:28 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:53.615 23:05:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:53.615 23:05:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:53.615 23:05:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:53.615 23:05:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:53.615 23:05:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:53.615 23:05:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:23:53.615 23:05:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:23:53.615 23:05:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:23:53.615 23:05:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 74437 00:23:53.615 23:05:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 74437 ']' 00:23:53.615 23:05:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 74437 00:23:53.615 23:05:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:23:53.615 23:05:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:53.615 23:05:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74437 00:23:53.615 killing process with pid 74437 00:23:53.615 Received shutdown signal, test time was about 8.124961 seconds 00:23:53.615 00:23:53.615 Latency(us) 00:23:53.615 [2024-12-09T23:05:28.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.615 [2024-12-09T23:05:28.978Z] =================================================================================================================== 00:23:53.615 [2024-12-09T23:05:28.978Z] Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:53.615 23:05:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:53.615 23:05:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:53.615 23:05:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74437' 00:23:53.615 23:05:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 74437 00:23:53.615 [2024-12-09 23:05:28.769016] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:53.615 23:05:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 74437 00:23:53.615 [2024-12-09 23:05:28.880776] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:23:54.185 00:23:54.185 real 0m10.356s 00:23:54.185 user 0m12.913s 00:23:54.185 sys 0m0.984s 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:54.185 ************************************ 00:23:54.185 END TEST raid_rebuild_test_io 00:23:54.185 ************************************ 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:54.185 23:05:29 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:23:54.185 23:05:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:23:54.185 23:05:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:54.185 23:05:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:54.185 ************************************ 00:23:54.185 START TEST raid_rebuild_test_sb_io 00:23:54.185 ************************************ 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 
true true true 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=74798 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 74798 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 74798 ']' 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:54.185 23:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:54.443 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:54.443 Zero copy mechanism will not be used. 
00:23:54.443 [2024-12-09 23:05:29.570061] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:23:54.443 [2024-12-09 23:05:29.570170] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74798 ] 00:23:54.443 [2024-12-09 23:05:29.718885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.443 [2024-12-09 23:05:29.801961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.701 [2024-12-09 23:05:29.911083] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:54.701 [2024-12-09 23:05:29.911119] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:55.286 BaseBdev1_malloc 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:23:55.286 [2024-12-09 23:05:30.448411] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:55.286 [2024-12-09 23:05:30.448462] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:55.286 [2024-12-09 23:05:30.448479] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:55.286 [2024-12-09 23:05:30.448488] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:55.286 [2024-12-09 23:05:30.450238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:55.286 [2024-12-09 23:05:30.450269] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:55.286 BaseBdev1 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:55.286 BaseBdev2_malloc 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:55.286 [2024-12-09 23:05:30.483832] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:23:55.286 [2024-12-09 23:05:30.483884] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:55.286 [2024-12-09 23:05:30.483901] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:55.286 [2024-12-09 23:05:30.483909] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:55.286 [2024-12-09 23:05:30.485660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:55.286 [2024-12-09 23:05:30.485785] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:55.286 BaseBdev2 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:55.286 spare_malloc 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:55.286 spare_delay 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.286 
23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:55.286 [2024-12-09 23:05:30.540828] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:55.286 [2024-12-09 23:05:30.540878] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:55.286 [2024-12-09 23:05:30.540892] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:55.286 [2024-12-09 23:05:30.540900] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:55.286 [2024-12-09 23:05:30.542631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:55.286 [2024-12-09 23:05:30.542663] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:55.286 spare 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:55.286 [2024-12-09 23:05:30.548876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:55.286 [2024-12-09 23:05:30.550357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:55.286 [2024-12-09 23:05:30.550492] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:55.286 [2024-12-09 23:05:30.550503] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:55.286 [2024-12-09 23:05:30.550712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:55.286 [2024-12-09 23:05:30.550832] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:55.286 [2024-12-09 23:05:30.550839] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:55.286 [2024-12-09 23:05:30.550950] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.286 23:05:30 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:55.286 "name": "raid_bdev1", 00:23:55.286 "uuid": "93e99f0e-0afc-4871-818f-b89163ab5f02", 00:23:55.286 "strip_size_kb": 0, 00:23:55.286 "state": "online", 00:23:55.286 "raid_level": "raid1", 00:23:55.286 "superblock": true, 00:23:55.286 "num_base_bdevs": 2, 00:23:55.286 "num_base_bdevs_discovered": 2, 00:23:55.286 "num_base_bdevs_operational": 2, 00:23:55.286 "base_bdevs_list": [ 00:23:55.286 { 00:23:55.286 "name": "BaseBdev1", 00:23:55.286 "uuid": "7de0e1c7-c10d-5506-9471-990634264950", 00:23:55.286 "is_configured": true, 00:23:55.286 "data_offset": 2048, 00:23:55.286 "data_size": 63488 00:23:55.286 }, 00:23:55.286 { 00:23:55.286 "name": "BaseBdev2", 00:23:55.286 "uuid": "9329e08d-f47d-5c77-84b8-38db69404727", 00:23:55.286 "is_configured": true, 00:23:55.286 "data_offset": 2048, 00:23:55.286 "data_size": 63488 00:23:55.286 } 00:23:55.286 ] 00:23:55.286 }' 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:55.286 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:55.544 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:55.544 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:55.544 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.544 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:55.544 [2024-12-09 23:05:30.869200] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:55.544 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.544 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:23:55.544 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:55.544 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:55.544 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.544 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:55.803 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.803 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:23:55.803 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:23:55.803 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:23:55.803 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.803 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:23:55.803 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:55.803 [2024-12-09 23:05:30.932925] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:55.803 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.803 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:55.803 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:55.803 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:23:55.803 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:55.803 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:55.803 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:55.803 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:55.803 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:55.803 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:55.803 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:55.803 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:55.803 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.803 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:55.803 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.803 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.803 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:55.803 "name": "raid_bdev1", 00:23:55.803 "uuid": "93e99f0e-0afc-4871-818f-b89163ab5f02", 00:23:55.803 "strip_size_kb": 0, 00:23:55.803 "state": "online", 00:23:55.803 "raid_level": "raid1", 00:23:55.803 "superblock": true, 00:23:55.803 "num_base_bdevs": 2, 00:23:55.803 "num_base_bdevs_discovered": 1, 00:23:55.803 "num_base_bdevs_operational": 1, 00:23:55.803 "base_bdevs_list": [ 00:23:55.803 { 00:23:55.803 "name": null, 00:23:55.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.803 "is_configured": false, 00:23:55.803 
"data_offset": 0, 00:23:55.803 "data_size": 63488 00:23:55.803 }, 00:23:55.803 { 00:23:55.803 "name": "BaseBdev2", 00:23:55.803 "uuid": "9329e08d-f47d-5c77-84b8-38db69404727", 00:23:55.803 "is_configured": true, 00:23:55.803 "data_offset": 2048, 00:23:55.803 "data_size": 63488 00:23:55.803 } 00:23:55.803 ] 00:23:55.803 }' 00:23:55.803 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:55.803 23:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:55.803 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:55.803 Zero copy mechanism will not be used. 00:23:55.803 Running I/O for 60 seconds... 00:23:55.803 [2024-12-09 23:05:31.017377] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:23:56.061 23:05:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:56.061 23:05:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.061 23:05:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:56.061 [2024-12-09 23:05:31.241878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:56.061 23:05:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.061 23:05:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:23:56.061 [2024-12-09 23:05:31.291088] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:56.061 [2024-12-09 23:05:31.292698] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:56.061 [2024-12-09 23:05:31.402194] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:56.319 [2024-12-09 23:05:31.515010] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:56.319 [2024-12-09 23:05:31.515262] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:56.577 [2024-12-09 23:05:31.852195] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:57.094 183.00 IOPS, 549.00 MiB/s [2024-12-09T23:05:32.457Z] [2024-12-09 23:05:32.274758] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:57.094 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:57.094 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:57.094 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:57.094 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:57.094 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:57.094 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:57.094 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.094 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:57.094 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.094 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.094 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:57.094 "name": "raid_bdev1", 00:23:57.094 "uuid": "93e99f0e-0afc-4871-818f-b89163ab5f02", 00:23:57.094 "strip_size_kb": 0, 00:23:57.094 
"state": "online", 00:23:57.094 "raid_level": "raid1", 00:23:57.094 "superblock": true, 00:23:57.094 "num_base_bdevs": 2, 00:23:57.094 "num_base_bdevs_discovered": 2, 00:23:57.094 "num_base_bdevs_operational": 2, 00:23:57.094 "process": { 00:23:57.094 "type": "rebuild", 00:23:57.094 "target": "spare", 00:23:57.094 "progress": { 00:23:57.094 "blocks": 14336, 00:23:57.094 "percent": 22 00:23:57.094 } 00:23:57.094 }, 00:23:57.094 "base_bdevs_list": [ 00:23:57.094 { 00:23:57.094 "name": "spare", 00:23:57.094 "uuid": "f5a8687f-47d6-5c1b-82d5-cfabad0af7f3", 00:23:57.094 "is_configured": true, 00:23:57.094 "data_offset": 2048, 00:23:57.094 "data_size": 63488 00:23:57.094 }, 00:23:57.094 { 00:23:57.094 "name": "BaseBdev2", 00:23:57.094 "uuid": "9329e08d-f47d-5c77-84b8-38db69404727", 00:23:57.094 "is_configured": true, 00:23:57.094 "data_offset": 2048, 00:23:57.094 "data_size": 63488 00:23:57.094 } 00:23:57.094 ] 00:23:57.094 }' 00:23:57.094 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:57.094 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:57.094 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:57.094 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:57.094 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:57.094 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.094 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:57.094 [2024-12-09 23:05:32.375799] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:57.094 [2024-12-09 23:05:32.388642] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 
18432 00:23:57.358 [2024-12-09 23:05:32.493819] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:57.358 [2024-12-09 23:05:32.500909] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:57.359 [2024-12-09 23:05:32.500937] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:57.359 [2024-12-09 23:05:32.500948] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:57.359 [2024-12-09 23:05:32.517668] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:23:57.359 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.359 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:57.359 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:57.359 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:57.359 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:57.359 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:57.359 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:57.359 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:57.359 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:57.359 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:57.359 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:57.359 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:23:57.359 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.359 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.359 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:57.359 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.359 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:57.359 "name": "raid_bdev1", 00:23:57.359 "uuid": "93e99f0e-0afc-4871-818f-b89163ab5f02", 00:23:57.359 "strip_size_kb": 0, 00:23:57.359 "state": "online", 00:23:57.359 "raid_level": "raid1", 00:23:57.359 "superblock": true, 00:23:57.359 "num_base_bdevs": 2, 00:23:57.359 "num_base_bdevs_discovered": 1, 00:23:57.359 "num_base_bdevs_operational": 1, 00:23:57.359 "base_bdevs_list": [ 00:23:57.359 { 00:23:57.359 "name": null, 00:23:57.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:57.359 "is_configured": false, 00:23:57.359 "data_offset": 0, 00:23:57.359 "data_size": 63488 00:23:57.359 }, 00:23:57.359 { 00:23:57.359 "name": "BaseBdev2", 00:23:57.359 "uuid": "9329e08d-f47d-5c77-84b8-38db69404727", 00:23:57.359 "is_configured": true, 00:23:57.359 "data_offset": 2048, 00:23:57.359 "data_size": 63488 00:23:57.359 } 00:23:57.359 ] 00:23:57.359 }' 00:23:57.359 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:57.359 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:57.617 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:57.617 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:57.617 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:23:57.617 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:57.617 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:57.617 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:57.617 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.617 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.617 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:57.617 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.617 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:57.617 "name": "raid_bdev1", 00:23:57.617 "uuid": "93e99f0e-0afc-4871-818f-b89163ab5f02", 00:23:57.617 "strip_size_kb": 0, 00:23:57.617 "state": "online", 00:23:57.617 "raid_level": "raid1", 00:23:57.617 "superblock": true, 00:23:57.617 "num_base_bdevs": 2, 00:23:57.617 "num_base_bdevs_discovered": 1, 00:23:57.617 "num_base_bdevs_operational": 1, 00:23:57.617 "base_bdevs_list": [ 00:23:57.617 { 00:23:57.617 "name": null, 00:23:57.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:57.617 "is_configured": false, 00:23:57.617 "data_offset": 0, 00:23:57.617 "data_size": 63488 00:23:57.617 }, 00:23:57.617 { 00:23:57.617 "name": "BaseBdev2", 00:23:57.617 "uuid": "9329e08d-f47d-5c77-84b8-38db69404727", 00:23:57.617 "is_configured": true, 00:23:57.617 "data_offset": 2048, 00:23:57.617 "data_size": 63488 00:23:57.617 } 00:23:57.617 ] 00:23:57.617 }' 00:23:57.617 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:57.617 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:23:57.617 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:57.617 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:57.617 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:57.617 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.617 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:57.617 [2024-12-09 23:05:32.942457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:57.617 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.617 23:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:23:57.617 [2024-12-09 23:05:32.971141] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:23:57.617 [2024-12-09 23:05:32.972866] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:57.874 190.00 IOPS, 570.00 MiB/s [2024-12-09T23:05:33.237Z] [2024-12-09 23:05:33.095860] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:57.874 [2024-12-09 23:05:33.096296] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:57.874 [2024-12-09 23:05:33.220645] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:57.874 [2024-12-09 23:05:33.220877] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:58.440 [2024-12-09 23:05:33.677853] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 
offset_end: 12288 00:23:58.703 [2024-12-09 23:05:33.899480] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:58.703 23:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:58.703 23:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:58.703 23:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:58.703 23:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:58.703 23:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:58.703 23:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:58.703 23:05:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.703 23:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.703 23:05:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:58.703 23:05:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.703 23:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:58.703 "name": "raid_bdev1", 00:23:58.703 "uuid": "93e99f0e-0afc-4871-818f-b89163ab5f02", 00:23:58.703 "strip_size_kb": 0, 00:23:58.703 "state": "online", 00:23:58.703 "raid_level": "raid1", 00:23:58.703 "superblock": true, 00:23:58.703 "num_base_bdevs": 2, 00:23:58.703 "num_base_bdevs_discovered": 2, 00:23:58.703 "num_base_bdevs_operational": 2, 00:23:58.703 "process": { 00:23:58.703 "type": "rebuild", 00:23:58.703 "target": "spare", 00:23:58.703 "progress": { 00:23:58.703 "blocks": 14336, 00:23:58.703 "percent": 22 00:23:58.703 } 00:23:58.703 }, 
00:23:58.703 "base_bdevs_list": [ 00:23:58.703 { 00:23:58.703 "name": "spare", 00:23:58.703 "uuid": "f5a8687f-47d6-5c1b-82d5-cfabad0af7f3", 00:23:58.703 "is_configured": true, 00:23:58.703 "data_offset": 2048, 00:23:58.703 "data_size": 63488 00:23:58.703 }, 00:23:58.703 { 00:23:58.703 "name": "BaseBdev2", 00:23:58.703 "uuid": "9329e08d-f47d-5c77-84b8-38db69404727", 00:23:58.703 "is_configured": true, 00:23:58.703 "data_offset": 2048, 00:23:58.703 "data_size": 63488 00:23:58.703 } 00:23:58.703 ] 00:23:58.703 }' 00:23:58.703 23:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:58.703 [2024-12-09 23:05:34.011557] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:58.703 162.67 IOPS, 488.00 MiB/s [2024-12-09T23:05:34.066Z] 23:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:58.703 23:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:58.703 23:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:58.961 23:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:23:58.961 23:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:23:58.961 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:23:58.961 23:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:23:58.961 23:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:23:58.961 23:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:23:58.961 23:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=342 00:23:58.961 23:05:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:58.961 23:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:58.961 23:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:58.961 23:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:58.961 23:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:58.961 23:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:58.961 23:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:58.961 23:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.961 23:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.961 23:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:58.961 23:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.961 23:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:58.961 "name": "raid_bdev1", 00:23:58.961 "uuid": "93e99f0e-0afc-4871-818f-b89163ab5f02", 00:23:58.961 "strip_size_kb": 0, 00:23:58.961 "state": "online", 00:23:58.961 "raid_level": "raid1", 00:23:58.961 "superblock": true, 00:23:58.961 "num_base_bdevs": 2, 00:23:58.961 "num_base_bdevs_discovered": 2, 00:23:58.961 "num_base_bdevs_operational": 2, 00:23:58.961 "process": { 00:23:58.961 "type": "rebuild", 00:23:58.961 "target": "spare", 00:23:58.961 "progress": { 00:23:58.961 "blocks": 16384, 00:23:58.961 "percent": 25 00:23:58.961 } 00:23:58.961 }, 00:23:58.961 "base_bdevs_list": [ 00:23:58.961 { 00:23:58.961 "name": "spare", 00:23:58.961 "uuid": 
"f5a8687f-47d6-5c1b-82d5-cfabad0af7f3", 00:23:58.961 "is_configured": true, 00:23:58.961 "data_offset": 2048, 00:23:58.961 "data_size": 63488 00:23:58.961 }, 00:23:58.961 { 00:23:58.961 "name": "BaseBdev2", 00:23:58.961 "uuid": "9329e08d-f47d-5c77-84b8-38db69404727", 00:23:58.961 "is_configured": true, 00:23:58.961 "data_offset": 2048, 00:23:58.961 "data_size": 63488 00:23:58.961 } 00:23:58.961 ] 00:23:58.961 }' 00:23:58.961 23:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:58.961 23:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:58.961 23:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:58.961 23:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:58.961 23:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:59.220 [2024-12-09 23:05:34.343416] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:23:59.220 [2024-12-09 23:05:34.462580] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:23:59.787 [2024-12-09 23:05:34.911129] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:24:00.048 140.25 IOPS, 420.75 MiB/s [2024-12-09T23:05:35.411Z] 23:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:00.048 23:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:00.048 23:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:00.048 23:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:24:00.048 23:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:00.048 23:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:00.048 23:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:00.049 23:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.049 23:05:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.049 23:05:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:00.049 23:05:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.049 23:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:00.049 "name": "raid_bdev1", 00:24:00.049 "uuid": "93e99f0e-0afc-4871-818f-b89163ab5f02", 00:24:00.049 "strip_size_kb": 0, 00:24:00.049 "state": "online", 00:24:00.049 "raid_level": "raid1", 00:24:00.049 "superblock": true, 00:24:00.049 "num_base_bdevs": 2, 00:24:00.049 "num_base_bdevs_discovered": 2, 00:24:00.049 "num_base_bdevs_operational": 2, 00:24:00.049 "process": { 00:24:00.049 "type": "rebuild", 00:24:00.049 "target": "spare", 00:24:00.049 "progress": { 00:24:00.049 "blocks": 32768, 00:24:00.049 "percent": 51 00:24:00.049 } 00:24:00.049 }, 00:24:00.049 "base_bdevs_list": [ 00:24:00.049 { 00:24:00.049 "name": "spare", 00:24:00.049 "uuid": "f5a8687f-47d6-5c1b-82d5-cfabad0af7f3", 00:24:00.049 "is_configured": true, 00:24:00.049 "data_offset": 2048, 00:24:00.049 "data_size": 63488 00:24:00.049 }, 00:24:00.049 { 00:24:00.049 "name": "BaseBdev2", 00:24:00.049 "uuid": "9329e08d-f47d-5c77-84b8-38db69404727", 00:24:00.049 "is_configured": true, 00:24:00.049 "data_offset": 2048, 00:24:00.049 "data_size": 63488 00:24:00.049 } 00:24:00.049 ] 00:24:00.049 }' 00:24:00.049 23:05:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:00.049 23:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:00.049 23:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:00.049 23:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:00.049 23:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:00.308 [2024-12-09 23:05:35.451979] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:24:00.566 [2024-12-09 23:05:35.675917] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:24:00.566 [2024-12-09 23:05:35.676271] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:24:00.829 [2024-12-09 23:05:36.003125] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:24:01.088 122.20 IOPS, 366.60 MiB/s [2024-12-09T23:05:36.451Z] 23:05:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:01.088 23:05:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:01.088 23:05:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:01.088 23:05:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:01.088 23:05:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:01.088 23:05:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:01.088 23:05:36 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:01.088 23:05:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:01.088 23:05:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.088 23:05:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:01.088 23:05:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.088 23:05:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:01.088 "name": "raid_bdev1", 00:24:01.088 "uuid": "93e99f0e-0afc-4871-818f-b89163ab5f02", 00:24:01.088 "strip_size_kb": 0, 00:24:01.088 "state": "online", 00:24:01.088 "raid_level": "raid1", 00:24:01.088 "superblock": true, 00:24:01.088 "num_base_bdevs": 2, 00:24:01.088 "num_base_bdevs_discovered": 2, 00:24:01.088 "num_base_bdevs_operational": 2, 00:24:01.088 "process": { 00:24:01.088 "type": "rebuild", 00:24:01.088 "target": "spare", 00:24:01.088 "progress": { 00:24:01.088 "blocks": 49152, 00:24:01.088 "percent": 77 00:24:01.088 } 00:24:01.088 }, 00:24:01.088 "base_bdevs_list": [ 00:24:01.088 { 00:24:01.088 "name": "spare", 00:24:01.088 "uuid": "f5a8687f-47d6-5c1b-82d5-cfabad0af7f3", 00:24:01.088 "is_configured": true, 00:24:01.089 "data_offset": 2048, 00:24:01.089 "data_size": 63488 00:24:01.089 }, 00:24:01.089 { 00:24:01.089 "name": "BaseBdev2", 00:24:01.089 "uuid": "9329e08d-f47d-5c77-84b8-38db69404727", 00:24:01.089 "is_configured": true, 00:24:01.089 "data_offset": 2048, 00:24:01.089 "data_size": 63488 00:24:01.089 } 00:24:01.089 ] 00:24:01.089 }' 00:24:01.089 23:05:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:01.089 23:05:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:01.089 23:05:36 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:01.089 23:05:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:01.089 23:05:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:01.346 [2024-12-09 23:05:36.459072] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:24:01.912 108.50 IOPS, 325.50 MiB/s [2024-12-09T23:05:37.275Z] [2024-12-09 23:05:37.110709] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:01.912 [2024-12-09 23:05:37.215763] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:01.912 [2024-12-09 23:05:37.217438] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:02.170 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:02.170 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:02.170 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:02.170 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:02.170 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:02.170 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:02.170 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:02.170 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:02.170 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.170 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 
-- # set +x 00:24:02.170 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.170 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:02.170 "name": "raid_bdev1", 00:24:02.170 "uuid": "93e99f0e-0afc-4871-818f-b89163ab5f02", 00:24:02.170 "strip_size_kb": 0, 00:24:02.170 "state": "online", 00:24:02.170 "raid_level": "raid1", 00:24:02.170 "superblock": true, 00:24:02.170 "num_base_bdevs": 2, 00:24:02.170 "num_base_bdevs_discovered": 2, 00:24:02.170 "num_base_bdevs_operational": 2, 00:24:02.170 "base_bdevs_list": [ 00:24:02.170 { 00:24:02.170 "name": "spare", 00:24:02.170 "uuid": "f5a8687f-47d6-5c1b-82d5-cfabad0af7f3", 00:24:02.170 "is_configured": true, 00:24:02.170 "data_offset": 2048, 00:24:02.170 "data_size": 63488 00:24:02.170 }, 00:24:02.170 { 00:24:02.170 "name": "BaseBdev2", 00:24:02.170 "uuid": "9329e08d-f47d-5c77-84b8-38db69404727", 00:24:02.170 "is_configured": true, 00:24:02.170 "data_offset": 2048, 00:24:02.170 "data_size": 63488 00:24:02.170 } 00:24:02.170 ] 00:24:02.170 }' 00:24:02.170 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:02.170 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:02.170 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:02.170 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:24:02.170 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:24:02.170 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:02.170 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:02.170 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:24:02.170 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:02.170 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:02.170 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:02.170 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:02.170 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.170 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:02.170 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.170 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:02.170 "name": "raid_bdev1", 00:24:02.170 "uuid": "93e99f0e-0afc-4871-818f-b89163ab5f02", 00:24:02.170 "strip_size_kb": 0, 00:24:02.170 "state": "online", 00:24:02.170 "raid_level": "raid1", 00:24:02.170 "superblock": true, 00:24:02.170 "num_base_bdevs": 2, 00:24:02.170 "num_base_bdevs_discovered": 2, 00:24:02.170 "num_base_bdevs_operational": 2, 00:24:02.170 "base_bdevs_list": [ 00:24:02.170 { 00:24:02.170 "name": "spare", 00:24:02.170 "uuid": "f5a8687f-47d6-5c1b-82d5-cfabad0af7f3", 00:24:02.170 "is_configured": true, 00:24:02.170 "data_offset": 2048, 00:24:02.170 "data_size": 63488 00:24:02.170 }, 00:24:02.170 { 00:24:02.170 "name": "BaseBdev2", 00:24:02.170 "uuid": "9329e08d-f47d-5c77-84b8-38db69404727", 00:24:02.170 "is_configured": true, 00:24:02.170 "data_offset": 2048, 00:24:02.170 "data_size": 63488 00:24:02.170 } 00:24:02.170 ] 00:24:02.170 }' 00:24:02.170 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:02.492 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:24:02.492 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:02.492 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:02.492 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:02.492 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:02.492 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:02.492 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:02.492 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:02.492 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:02.492 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:02.492 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:02.492 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:02.492 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:02.492 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:02.492 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.492 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:02.492 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:02.492 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.492 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:02.492 "name": "raid_bdev1", 00:24:02.492 "uuid": "93e99f0e-0afc-4871-818f-b89163ab5f02", 00:24:02.492 "strip_size_kb": 0, 00:24:02.492 "state": "online", 00:24:02.492 "raid_level": "raid1", 00:24:02.492 "superblock": true, 00:24:02.492 "num_base_bdevs": 2, 00:24:02.492 "num_base_bdevs_discovered": 2, 00:24:02.492 "num_base_bdevs_operational": 2, 00:24:02.492 "base_bdevs_list": [ 00:24:02.492 { 00:24:02.492 "name": "spare", 00:24:02.492 "uuid": "f5a8687f-47d6-5c1b-82d5-cfabad0af7f3", 00:24:02.492 "is_configured": true, 00:24:02.492 "data_offset": 2048, 00:24:02.492 "data_size": 63488 00:24:02.492 }, 00:24:02.492 { 00:24:02.492 "name": "BaseBdev2", 00:24:02.492 "uuid": "9329e08d-f47d-5c77-84b8-38db69404727", 00:24:02.492 "is_configured": true, 00:24:02.492 "data_offset": 2048, 00:24:02.492 "data_size": 63488 00:24:02.492 } 00:24:02.492 ] 00:24:02.492 }' 00:24:02.492 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:02.492 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:02.750 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:02.750 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.750 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:02.750 [2024-12-09 23:05:37.900141] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:02.750 [2024-12-09 23:05:37.900164] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:02.750 00:24:02.750 Latency(us) 00:24:02.750 [2024-12-09T23:05:38.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:02.750 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:24:02.750 raid_bdev1 : 6.96 97.02 
291.07 0.00 0.00 14568.82 274.12 112923.57 00:24:02.750 [2024-12-09T23:05:38.113Z] =================================================================================================================== 00:24:02.750 [2024-12-09T23:05:38.113Z] Total : 97.02 291.07 0.00 0.00 14568.82 274.12 112923.57 00:24:02.750 { 00:24:02.750 "results": [ 00:24:02.750 { 00:24:02.750 "job": "raid_bdev1", 00:24:02.750 "core_mask": "0x1", 00:24:02.750 "workload": "randrw", 00:24:02.750 "percentage": 50, 00:24:02.750 "status": "finished", 00:24:02.750 "queue_depth": 2, 00:24:02.750 "io_size": 3145728, 00:24:02.750 "runtime": 6.957091, 00:24:02.750 "iops": 97.02331046122582, 00:24:02.750 "mibps": 291.06993138367744, 00:24:02.750 "io_failed": 0, 00:24:02.750 "io_timeout": 0, 00:24:02.750 "avg_latency_us": 14568.819674074075, 00:24:02.750 "min_latency_us": 274.11692307692306, 00:24:02.750 "max_latency_us": 112923.56923076924 00:24:02.750 } 00:24:02.750 ], 00:24:02.750 "core_count": 1 00:24:02.750 } 00:24:02.750 [2024-12-09 23:05:37.988283] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:02.750 [2024-12-09 23:05:37.988339] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:02.750 [2024-12-09 23:05:37.988403] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:02.750 [2024-12-09 23:05:37.988414] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:02.750 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.750 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:02.750 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.750 23:05:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:02.750 23:05:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:24:02.750 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.750 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:24:02.750 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:02.750 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:24:02.750 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:24:02.750 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:02.750 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:24:02.750 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:02.750 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:02.750 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:02.750 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:24:02.750 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:02.750 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:02.750 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:24:03.009 /dev/nbd0 00:24:03.009 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:03.009 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:03.009 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:03.009 
23:05:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:24:03.009 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:03.009 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:03.009 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:03.009 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:24:03.009 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:03.009 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:03.009 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:03.009 1+0 records in 00:24:03.009 1+0 records out 00:24:03.009 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325469 s, 12.6 MB/s 00:24:03.009 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:03.009 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:24:03.009 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:03.009 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:03.009 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:24:03.009 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:03.009 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:03.009 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 
00:24:03.009 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:24:03.009 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:24:03.009 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:03.009 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:24:03.009 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:03.009 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:03.009 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:03.009 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:24:03.009 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:03.009 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:03.009 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:24:03.265 /dev/nbd1 00:24:03.265 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:03.265 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:03.265 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:24:03.265 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:24:03.265 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:03.265 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:03.265 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:24:03.265 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:24:03.265 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:03.265 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:03.265 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:03.265 1+0 records in 00:24:03.265 1+0 records out 00:24:03.265 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329486 s, 12.4 MB/s 00:24:03.265 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:03.265 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:24:03.265 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:03.265 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:03.265 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:24:03.265 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:03.265 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:03.265 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:03.265 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:24:03.265 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:03.265 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 
00:24:03.265 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:03.265 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:24:03.265 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:03.265 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:24:03.535 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:03.535 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:03.535 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:03.535 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:03.535 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:03.535 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:03.535 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:24:03.535 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:03.535 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:24:03.535 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:03.535 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:03.535 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:03.535 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:24:03.535 23:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:03.535 23:05:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:03.794 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:03.794 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:03.794 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:03.794 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:03.794 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:03.794 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:03.794 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:24:03.794 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:03.794 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:24:03.794 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:24:03.794 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.794 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:03.794 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.794 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:03.794 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.794 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:03.794 [2024-12-09 23:05:39.060884] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:03.794 
[2024-12-09 23:05:39.060939] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:03.794 [2024-12-09 23:05:39.060961] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:24:03.794 [2024-12-09 23:05:39.060973] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:03.794 [2024-12-09 23:05:39.063204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:03.794 [2024-12-09 23:05:39.063242] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:03.794 [2024-12-09 23:05:39.063333] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:03.794 [2024-12-09 23:05:39.063377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:03.794 [2024-12-09 23:05:39.063499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:03.794 spare 00:24:03.794 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.794 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:24:03.794 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.794 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:04.052 [2024-12-09 23:05:39.163615] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:04.052 [2024-12-09 23:05:39.163660] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:04.052 [2024-12-09 23:05:39.163985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:24:04.052 [2024-12-09 23:05:39.164193] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:04.052 [2024-12-09 23:05:39.164213] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:24:04.052 [2024-12-09 23:05:39.164388] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:04.052 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.052 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:04.052 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:04.052 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:04.052 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:04.052 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:04.052 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:04.052 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:04.053 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:04.053 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:04.053 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:04.053 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:04.053 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.053 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.053 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:04.053 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.053 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:04.053 "name": "raid_bdev1", 00:24:04.053 "uuid": "93e99f0e-0afc-4871-818f-b89163ab5f02", 00:24:04.053 "strip_size_kb": 0, 00:24:04.053 "state": "online", 00:24:04.053 "raid_level": "raid1", 00:24:04.053 "superblock": true, 00:24:04.053 "num_base_bdevs": 2, 00:24:04.053 "num_base_bdevs_discovered": 2, 00:24:04.053 "num_base_bdevs_operational": 2, 00:24:04.053 "base_bdevs_list": [ 00:24:04.053 { 00:24:04.053 "name": "spare", 00:24:04.053 "uuid": "f5a8687f-47d6-5c1b-82d5-cfabad0af7f3", 00:24:04.053 "is_configured": true, 00:24:04.053 "data_offset": 2048, 00:24:04.053 "data_size": 63488 00:24:04.053 }, 00:24:04.053 { 00:24:04.053 "name": "BaseBdev2", 00:24:04.053 "uuid": "9329e08d-f47d-5c77-84b8-38db69404727", 00:24:04.053 "is_configured": true, 00:24:04.053 "data_offset": 2048, 00:24:04.053 "data_size": 63488 00:24:04.053 } 00:24:04.053 ] 00:24:04.053 }' 00:24:04.053 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:04.053 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:04.311 "name": "raid_bdev1", 00:24:04.311 "uuid": "93e99f0e-0afc-4871-818f-b89163ab5f02", 00:24:04.311 "strip_size_kb": 0, 00:24:04.311 "state": "online", 00:24:04.311 "raid_level": "raid1", 00:24:04.311 "superblock": true, 00:24:04.311 "num_base_bdevs": 2, 00:24:04.311 "num_base_bdevs_discovered": 2, 00:24:04.311 "num_base_bdevs_operational": 2, 00:24:04.311 "base_bdevs_list": [ 00:24:04.311 { 00:24:04.311 "name": "spare", 00:24:04.311 "uuid": "f5a8687f-47d6-5c1b-82d5-cfabad0af7f3", 00:24:04.311 "is_configured": true, 00:24:04.311 "data_offset": 2048, 00:24:04.311 "data_size": 63488 00:24:04.311 }, 00:24:04.311 { 00:24:04.311 "name": "BaseBdev2", 00:24:04.311 "uuid": "9329e08d-f47d-5c77-84b8-38db69404727", 00:24:04.311 "is_configured": true, 00:24:04.311 "data_offset": 2048, 00:24:04.311 "data_size": 63488 00:24:04.311 } 00:24:04.311 ] 00:24:04.311 }' 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:04.311 [2024-12-09 23:05:39.605130] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:04.311 23:05:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:04.311 "name": "raid_bdev1", 00:24:04.311 "uuid": "93e99f0e-0afc-4871-818f-b89163ab5f02", 00:24:04.311 "strip_size_kb": 0, 00:24:04.311 "state": "online", 00:24:04.311 "raid_level": "raid1", 00:24:04.311 "superblock": true, 00:24:04.311 "num_base_bdevs": 2, 00:24:04.311 "num_base_bdevs_discovered": 1, 00:24:04.311 "num_base_bdevs_operational": 1, 00:24:04.311 "base_bdevs_list": [ 00:24:04.311 { 00:24:04.311 "name": null, 00:24:04.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.311 "is_configured": false, 00:24:04.311 "data_offset": 0, 00:24:04.311 "data_size": 63488 00:24:04.311 }, 00:24:04.311 { 00:24:04.311 "name": "BaseBdev2", 00:24:04.311 "uuid": "9329e08d-f47d-5c77-84b8-38db69404727", 00:24:04.311 "is_configured": true, 00:24:04.311 "data_offset": 2048, 00:24:04.311 "data_size": 63488 00:24:04.311 } 00:24:04.311 ] 00:24:04.311 }' 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:04.311 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:04.569 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:04.569 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.569 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:04.569 [2024-12-09 23:05:39.913328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:04.569 [2024-12-09 23:05:39.913586] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:04.569 [2024-12-09 23:05:39.913620] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:04.569 [2024-12-09 23:05:39.913668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:04.569 [2024-12-09 23:05:39.925168] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:24:04.569 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.570 23:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:24:04.570 [2024-12-09 23:05:39.927446] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:05.944 23:05:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:05.944 23:05:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:05.944 23:05:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:05.944 23:05:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:05.944 23:05:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:05.944 23:05:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:24:05.944 23:05:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.944 23:05:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.944 23:05:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:05.944 23:05:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.944 23:05:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:05.944 "name": "raid_bdev1", 00:24:05.945 "uuid": "93e99f0e-0afc-4871-818f-b89163ab5f02", 00:24:05.945 "strip_size_kb": 0, 00:24:05.945 "state": "online", 00:24:05.945 "raid_level": "raid1", 00:24:05.945 "superblock": true, 00:24:05.945 "num_base_bdevs": 2, 00:24:05.945 "num_base_bdevs_discovered": 2, 00:24:05.945 "num_base_bdevs_operational": 2, 00:24:05.945 "process": { 00:24:05.945 "type": "rebuild", 00:24:05.945 "target": "spare", 00:24:05.945 "progress": { 00:24:05.945 "blocks": 20480, 00:24:05.945 "percent": 32 00:24:05.945 } 00:24:05.945 }, 00:24:05.945 "base_bdevs_list": [ 00:24:05.945 { 00:24:05.945 "name": "spare", 00:24:05.945 "uuid": "f5a8687f-47d6-5c1b-82d5-cfabad0af7f3", 00:24:05.945 "is_configured": true, 00:24:05.945 "data_offset": 2048, 00:24:05.945 "data_size": 63488 00:24:05.945 }, 00:24:05.945 { 00:24:05.945 "name": "BaseBdev2", 00:24:05.945 "uuid": "9329e08d-f47d-5c77-84b8-38db69404727", 00:24:05.945 "is_configured": true, 00:24:05.945 "data_offset": 2048, 00:24:05.945 "data_size": 63488 00:24:05.945 } 00:24:05.945 ] 00:24:05.945 }' 00:24:05.945 23:05:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:05.945 23:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:05.945 23:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // 
"none"' 00:24:05.945 23:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:05.945 23:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:24:05.945 23:05:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.945 23:05:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:05.945 [2024-12-09 23:05:41.049436] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:05.945 [2024-12-09 23:05:41.133788] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:05.945 [2024-12-09 23:05:41.133845] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:05.945 [2024-12-09 23:05:41.133859] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:05.945 [2024-12-09 23:05:41.133865] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:05.945 23:05:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.945 23:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:05.945 23:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:05.945 23:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:05.945 23:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:05.945 23:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:05.945 23:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:05.945 23:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:24:05.945 23:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:05.945 23:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:05.945 23:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:05.945 23:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.945 23:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:05.945 23:05:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.945 23:05:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:05.945 23:05:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.945 23:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:05.945 "name": "raid_bdev1", 00:24:05.945 "uuid": "93e99f0e-0afc-4871-818f-b89163ab5f02", 00:24:05.945 "strip_size_kb": 0, 00:24:05.945 "state": "online", 00:24:05.945 "raid_level": "raid1", 00:24:05.945 "superblock": true, 00:24:05.945 "num_base_bdevs": 2, 00:24:05.945 "num_base_bdevs_discovered": 1, 00:24:05.945 "num_base_bdevs_operational": 1, 00:24:05.945 "base_bdevs_list": [ 00:24:05.945 { 00:24:05.945 "name": null, 00:24:05.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:05.945 "is_configured": false, 00:24:05.945 "data_offset": 0, 00:24:05.945 "data_size": 63488 00:24:05.945 }, 00:24:05.945 { 00:24:05.945 "name": "BaseBdev2", 00:24:05.945 "uuid": "9329e08d-f47d-5c77-84b8-38db69404727", 00:24:05.945 "is_configured": true, 00:24:05.945 "data_offset": 2048, 00:24:05.945 "data_size": 63488 00:24:05.945 } 00:24:05.945 ] 00:24:05.945 }' 00:24:05.945 23:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:05.945 23:05:41 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:06.212 23:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:06.212 23:05:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.212 23:05:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:06.212 [2024-12-09 23:05:41.469888] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:06.212 [2024-12-09 23:05:41.469945] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:06.212 [2024-12-09 23:05:41.469966] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:24:06.212 [2024-12-09 23:05:41.469975] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:06.212 [2024-12-09 23:05:41.470368] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:06.212 [2024-12-09 23:05:41.470389] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:06.212 [2024-12-09 23:05:41.470470] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:06.212 [2024-12-09 23:05:41.470480] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:06.212 [2024-12-09 23:05:41.470490] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:24:06.212 [2024-12-09 23:05:41.470507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:06.212 [2024-12-09 23:05:41.479821] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:24:06.212 spare 00:24:06.212 23:05:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.212 23:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:24:06.212 [2024-12-09 23:05:41.481431] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:07.204 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:07.204 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:07.204 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:07.204 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:07.204 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:07.204 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:07.204 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.204 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.204 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:07.204 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.204 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:07.204 "name": "raid_bdev1", 00:24:07.204 "uuid": "93e99f0e-0afc-4871-818f-b89163ab5f02", 00:24:07.204 "strip_size_kb": 0, 00:24:07.204 
"state": "online", 00:24:07.204 "raid_level": "raid1", 00:24:07.204 "superblock": true, 00:24:07.204 "num_base_bdevs": 2, 00:24:07.204 "num_base_bdevs_discovered": 2, 00:24:07.204 "num_base_bdevs_operational": 2, 00:24:07.204 "process": { 00:24:07.204 "type": "rebuild", 00:24:07.204 "target": "spare", 00:24:07.204 "progress": { 00:24:07.204 "blocks": 20480, 00:24:07.204 "percent": 32 00:24:07.204 } 00:24:07.204 }, 00:24:07.204 "base_bdevs_list": [ 00:24:07.204 { 00:24:07.204 "name": "spare", 00:24:07.204 "uuid": "f5a8687f-47d6-5c1b-82d5-cfabad0af7f3", 00:24:07.204 "is_configured": true, 00:24:07.204 "data_offset": 2048, 00:24:07.204 "data_size": 63488 00:24:07.204 }, 00:24:07.204 { 00:24:07.204 "name": "BaseBdev2", 00:24:07.204 "uuid": "9329e08d-f47d-5c77-84b8-38db69404727", 00:24:07.204 "is_configured": true, 00:24:07.204 "data_offset": 2048, 00:24:07.204 "data_size": 63488 00:24:07.204 } 00:24:07.204 ] 00:24:07.204 }' 00:24:07.204 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:07.204 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:07.204 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:07.462 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:07.462 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:24:07.462 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.462 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:07.462 [2024-12-09 23:05:42.571958] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:07.462 [2024-12-09 23:05:42.586687] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:24:07.462 [2024-12-09 23:05:42.586745] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:07.462 [2024-12-09 23:05:42.586757] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:07.462 [2024-12-09 23:05:42.586765] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:07.462 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.462 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:07.462 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:07.462 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:07.462 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:07.462 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:07.462 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:07.462 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:07.462 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:07.462 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:07.462 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:07.462 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:07.462 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.462 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.462 23:05:42 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:07.462 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.462 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:07.462 "name": "raid_bdev1", 00:24:07.462 "uuid": "93e99f0e-0afc-4871-818f-b89163ab5f02", 00:24:07.462 "strip_size_kb": 0, 00:24:07.462 "state": "online", 00:24:07.462 "raid_level": "raid1", 00:24:07.462 "superblock": true, 00:24:07.462 "num_base_bdevs": 2, 00:24:07.462 "num_base_bdevs_discovered": 1, 00:24:07.462 "num_base_bdevs_operational": 1, 00:24:07.462 "base_bdevs_list": [ 00:24:07.462 { 00:24:07.462 "name": null, 00:24:07.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.462 "is_configured": false, 00:24:07.462 "data_offset": 0, 00:24:07.462 "data_size": 63488 00:24:07.462 }, 00:24:07.462 { 00:24:07.462 "name": "BaseBdev2", 00:24:07.462 "uuid": "9329e08d-f47d-5c77-84b8-38db69404727", 00:24:07.462 "is_configured": true, 00:24:07.462 "data_offset": 2048, 00:24:07.462 "data_size": 63488 00:24:07.462 } 00:24:07.462 ] 00:24:07.462 }' 00:24:07.462 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:07.462 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:07.722 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:07.722 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:07.722 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:07.722 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:07.722 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:07.722 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:07.722 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.722 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:07.722 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.722 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.722 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:07.722 "name": "raid_bdev1", 00:24:07.722 "uuid": "93e99f0e-0afc-4871-818f-b89163ab5f02", 00:24:07.722 "strip_size_kb": 0, 00:24:07.722 "state": "online", 00:24:07.722 "raid_level": "raid1", 00:24:07.722 "superblock": true, 00:24:07.722 "num_base_bdevs": 2, 00:24:07.722 "num_base_bdevs_discovered": 1, 00:24:07.722 "num_base_bdevs_operational": 1, 00:24:07.722 "base_bdevs_list": [ 00:24:07.722 { 00:24:07.722 "name": null, 00:24:07.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.722 "is_configured": false, 00:24:07.722 "data_offset": 0, 00:24:07.722 "data_size": 63488 00:24:07.722 }, 00:24:07.722 { 00:24:07.722 "name": "BaseBdev2", 00:24:07.722 "uuid": "9329e08d-f47d-5c77-84b8-38db69404727", 00:24:07.722 "is_configured": true, 00:24:07.722 "data_offset": 2048, 00:24:07.722 "data_size": 63488 00:24:07.722 } 00:24:07.722 ] 00:24:07.722 }' 00:24:07.722 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:07.722 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:07.722 23:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:07.722 23:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:07.722 23:05:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:24:07.722 23:05:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.722 23:05:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:07.722 23:05:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.722 23:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:07.722 23:05:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.722 23:05:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:07.722 [2024-12-09 23:05:43.015361] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:07.722 [2024-12-09 23:05:43.015410] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:07.722 [2024-12-09 23:05:43.015430] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:24:07.722 [2024-12-09 23:05:43.015441] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:07.722 [2024-12-09 23:05:43.015788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:07.722 [2024-12-09 23:05:43.015809] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:07.722 [2024-12-09 23:05:43.015868] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:07.722 [2024-12-09 23:05:43.015882] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:07.722 [2024-12-09 23:05:43.015889] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:07.722 [2024-12-09 23:05:43.015898] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:24:07.722 BaseBdev1 00:24:07.722 23:05:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.722 23:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:09.097 "name": "raid_bdev1", 00:24:09.097 "uuid": "93e99f0e-0afc-4871-818f-b89163ab5f02", 00:24:09.097 "strip_size_kb": 0, 00:24:09.097 "state": "online", 00:24:09.097 "raid_level": "raid1", 00:24:09.097 "superblock": true, 00:24:09.097 "num_base_bdevs": 2, 00:24:09.097 "num_base_bdevs_discovered": 1, 00:24:09.097 "num_base_bdevs_operational": 1, 00:24:09.097 "base_bdevs_list": [ 00:24:09.097 { 00:24:09.097 "name": null, 00:24:09.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:09.097 "is_configured": false, 00:24:09.097 "data_offset": 0, 00:24:09.097 "data_size": 63488 00:24:09.097 }, 00:24:09.097 { 00:24:09.097 "name": "BaseBdev2", 00:24:09.097 "uuid": "9329e08d-f47d-5c77-84b8-38db69404727", 00:24:09.097 "is_configured": true, 00:24:09.097 "data_offset": 2048, 00:24:09.097 "data_size": 63488 00:24:09.097 } 00:24:09.097 ] 00:24:09.097 }' 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:09.097 "name": "raid_bdev1", 00:24:09.097 "uuid": "93e99f0e-0afc-4871-818f-b89163ab5f02", 00:24:09.097 "strip_size_kb": 0, 00:24:09.097 "state": "online", 00:24:09.097 "raid_level": "raid1", 00:24:09.097 "superblock": true, 00:24:09.097 "num_base_bdevs": 2, 00:24:09.097 "num_base_bdevs_discovered": 1, 00:24:09.097 "num_base_bdevs_operational": 1, 00:24:09.097 "base_bdevs_list": [ 00:24:09.097 { 00:24:09.097 "name": null, 00:24:09.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:09.097 "is_configured": false, 00:24:09.097 "data_offset": 0, 00:24:09.097 "data_size": 63488 00:24:09.097 }, 00:24:09.097 { 00:24:09.097 "name": "BaseBdev2", 00:24:09.097 "uuid": "9329e08d-f47d-5c77-84b8-38db69404727", 00:24:09.097 "is_configured": true, 00:24:09.097 "data_offset": 2048, 00:24:09.097 "data_size": 63488 00:24:09.097 } 00:24:09.097 ] 00:24:09.097 }' 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:09.097 [2024-12-09 23:05:44.435835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:09.097 [2024-12-09 23:05:44.435967] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:09.097 [2024-12-09 23:05:44.435980] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:09.097 request: 00:24:09.097 { 00:24:09.097 "base_bdev": "BaseBdev1", 00:24:09.097 "raid_bdev": "raid_bdev1", 00:24:09.097 "method": "bdev_raid_add_base_bdev", 00:24:09.097 "req_id": 1 00:24:09.097 } 00:24:09.097 Got JSON-RPC error response 00:24:09.097 response: 00:24:09.097 { 00:24:09.097 "code": -22, 00:24:09.097 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:24:09.097 } 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:09.097 23:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:24:10.470 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:10.470 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:10.470 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:10.470 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:10.470 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:10.470 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:10.470 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:10.470 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:10.470 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:10.470 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:10.470 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:10.470 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.470 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:10.470 23:05:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:10.470 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.470 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:10.470 "name": "raid_bdev1", 00:24:10.470 "uuid": "93e99f0e-0afc-4871-818f-b89163ab5f02", 00:24:10.470 "strip_size_kb": 0, 00:24:10.470 "state": "online", 00:24:10.470 "raid_level": "raid1", 00:24:10.470 "superblock": true, 00:24:10.470 "num_base_bdevs": 2, 00:24:10.470 "num_base_bdevs_discovered": 1, 00:24:10.470 "num_base_bdevs_operational": 1, 00:24:10.470 "base_bdevs_list": [ 00:24:10.470 { 00:24:10.470 "name": null, 00:24:10.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:10.470 "is_configured": false, 00:24:10.470 "data_offset": 0, 00:24:10.470 "data_size": 63488 00:24:10.470 }, 00:24:10.470 { 00:24:10.470 "name": "BaseBdev2", 00:24:10.470 "uuid": "9329e08d-f47d-5c77-84b8-38db69404727", 00:24:10.470 "is_configured": true, 00:24:10.470 "data_offset": 2048, 00:24:10.471 "data_size": 63488 00:24:10.471 } 00:24:10.471 ] 00:24:10.471 }' 00:24:10.471 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:10.471 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:10.471 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:10.471 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:10.471 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:10.471 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:10.471 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:10.471 23:05:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:10.471 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.471 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:10.471 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:10.471 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.471 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:10.471 "name": "raid_bdev1", 00:24:10.471 "uuid": "93e99f0e-0afc-4871-818f-b89163ab5f02", 00:24:10.471 "strip_size_kb": 0, 00:24:10.471 "state": "online", 00:24:10.471 "raid_level": "raid1", 00:24:10.471 "superblock": true, 00:24:10.471 "num_base_bdevs": 2, 00:24:10.471 "num_base_bdevs_discovered": 1, 00:24:10.471 "num_base_bdevs_operational": 1, 00:24:10.471 "base_bdevs_list": [ 00:24:10.471 { 00:24:10.471 "name": null, 00:24:10.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:10.471 "is_configured": false, 00:24:10.471 "data_offset": 0, 00:24:10.471 "data_size": 63488 00:24:10.471 }, 00:24:10.471 { 00:24:10.471 "name": "BaseBdev2", 00:24:10.471 "uuid": "9329e08d-f47d-5c77-84b8-38db69404727", 00:24:10.471 "is_configured": true, 00:24:10.471 "data_offset": 2048, 00:24:10.471 "data_size": 63488 00:24:10.471 } 00:24:10.471 ] 00:24:10.471 }' 00:24:10.471 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:10.471 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:10.728 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:10.728 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:10.728 23:05:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 74798 00:24:10.728 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 74798 ']' 00:24:10.728 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 74798 00:24:10.728 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:24:10.728 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:10.728 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74798 00:24:10.728 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:10.728 killing process with pid 74798 00:24:10.728 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:10.728 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74798' 00:24:10.728 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 74798 00:24:10.728 Received shutdown signal, test time was about 14.867159 seconds 00:24:10.728 00:24:10.728 Latency(us) 00:24:10.728 [2024-12-09T23:05:46.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.728 [2024-12-09T23:05:46.091Z] =================================================================================================================== 00:24:10.728 [2024-12-09T23:05:46.091Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:10.728 [2024-12-09 23:05:45.886172] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:10.728 23:05:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 74798 00:24:10.728 [2024-12-09 23:05:45.886278] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:10.728 [2024-12-09 23:05:45.886325] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:10.728 [2024-12-09 23:05:45.886339] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:24:10.728 [2024-12-09 23:05:45.997039] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:11.298 23:05:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:24:11.298 00:24:11.298 real 0m17.079s 00:24:11.298 user 0m21.773s 00:24:11.298 sys 0m1.409s 00:24:11.298 23:05:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:11.298 23:05:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:11.298 ************************************ 00:24:11.298 END TEST raid_rebuild_test_sb_io 00:24:11.298 ************************************ 00:24:11.298 23:05:46 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:24:11.298 23:05:46 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:24:11.298 23:05:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:24:11.298 23:05:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:11.298 23:05:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:11.298 ************************************ 00:24:11.298 START TEST raid_rebuild_test 00:24:11.298 ************************************ 00:24:11.298 23:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:24:11.298 23:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:24:11.298 23:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:24:11.298 23:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:24:11.298 23:05:46 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:24:11.298 23:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:24:11.298 23:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:24:11.298 23:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:11.298 23:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:24:11.299 23:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:11.299 23:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:11.299 23:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:24:11.299 23:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:11.299 23:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:11.299 23:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:24:11.299 23:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:11.299 23:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:11.299 23:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:24:11.299 23:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:11.299 23:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:11.299 23:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:11.299 23:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:24:11.299 23:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:24:11.299 23:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:24:11.299 23:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:24:11.299 23:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:24:11.299 23:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:24:11.299 23:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:24:11.299 23:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:24:11.299 23:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:24:11.299 23:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75459 00:24:11.299 23:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75459 00:24:11.299 23:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75459 ']' 00:24:11.299 23:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.299 23:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:11.299 23:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:11.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:11.299 23:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:11.299 23:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:11.299 23:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:11.556 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:11.556 Zero copy mechanism will not be used. 
00:24:11.556 [2024-12-09 23:05:46.690295] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:24:11.556 [2024-12-09 23:05:46.690392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75459 ] 00:24:11.556 [2024-12-09 23:05:46.839475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.813 [2024-12-09 23:05:46.925202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.813 [2024-12-09 23:05:47.035202] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:11.813 [2024-12-09 23:05:47.035230] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:12.380 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:12.380 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:24:12.380 23:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:12.380 23:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:12.380 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.380 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.380 BaseBdev1_malloc 00:24:12.380 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.380 23:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:12.380 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.380 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.380 
[2024-12-09 23:05:47.520734] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:12.380 [2024-12-09 23:05:47.520786] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:12.380 [2024-12-09 23:05:47.520805] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:12.380 [2024-12-09 23:05:47.520815] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:12.380 [2024-12-09 23:05:47.522609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:12.380 [2024-12-09 23:05:47.522645] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:12.380 BaseBdev1 00:24:12.380 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.380 23:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:12.380 23:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:12.380 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.380 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.380 BaseBdev2_malloc 00:24:12.380 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.380 23:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.381 [2024-12-09 23:05:47.552002] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:12.381 [2024-12-09 23:05:47.552048] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:24:12.381 [2024-12-09 23:05:47.552066] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:12.381 [2024-12-09 23:05:47.552075] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:12.381 [2024-12-09 23:05:47.553794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:12.381 [2024-12-09 23:05:47.553828] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:12.381 BaseBdev2 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.381 BaseBdev3_malloc 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.381 [2024-12-09 23:05:47.597204] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:12.381 [2024-12-09 23:05:47.597256] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:12.381 [2024-12-09 23:05:47.597274] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:12.381 [2024-12-09 23:05:47.597283] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:12.381 [2024-12-09 23:05:47.599017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:12.381 [2024-12-09 23:05:47.599051] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:12.381 BaseBdev3 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.381 BaseBdev4_malloc 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.381 [2024-12-09 23:05:47.628781] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:12.381 [2024-12-09 23:05:47.628825] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:12.381 [2024-12-09 23:05:47.628839] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:24:12.381 [2024-12-09 23:05:47.628849] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:12.381 [2024-12-09 23:05:47.630564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:12.381 [2024-12-09 23:05:47.630597] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:12.381 BaseBdev4 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.381 spare_malloc 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.381 spare_delay 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.381 [2024-12-09 23:05:47.668452] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:12.381 [2024-12-09 23:05:47.668491] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:12.381 [2024-12-09 23:05:47.668509] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:24:12.381 [2024-12-09 23:05:47.668518] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:12.381 [2024-12-09 
23:05:47.670249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:12.381 [2024-12-09 23:05:47.670282] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:12.381 spare 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.381 [2024-12-09 23:05:47.676490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:12.381 [2024-12-09 23:05:47.677947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:12.381 [2024-12-09 23:05:47.678001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:12.381 [2024-12-09 23:05:47.678041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:12.381 [2024-12-09 23:05:47.678115] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:12.381 [2024-12-09 23:05:47.678126] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:24:12.381 [2024-12-09 23:05:47.678331] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:12.381 [2024-12-09 23:05:47.678456] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:12.381 [2024-12-09 23:05:47.678470] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:12.381 [2024-12-09 23:05:47.678575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
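Annotation (not part of the log): right after the `bdev_raid_create -r raid1` call above, the script's `verify_raid_bdev_state` helper fetches `bdev_raid_get_bdevs all`, selects the named bdev with `jq`, and asserts on fields such as `state` and `raid_level`. A minimal standalone sketch of that kind of field check, run against a trimmed copy of the JSON captured in this log (plain `sed` is used here instead of `jq` so the sketch has no external dependency; the real helper shells out to `jq -r`):

```shell
# Trimmed raid_bdev_info as captured by the test run above; the checks
# approximate what verify_raid_bdev_state asserts (state/raid_level).
raid_bdev_info='{ "name": "raid_bdev1", "state": "online", "raid_level": "raid1", "num_base_bdevs_discovered": 4 }'

# Pull single string fields out of the one-line JSON with sed captures.
state=$(printf '%s\n' "$raid_bdev_info" | sed -n 's/.*"state": "\([^"]*\)".*/\1/p')
level=$(printf '%s\n' "$raid_bdev_info" | sed -n 's/.*"raid_level": "\([^"]*\)".*/\1/p')

# Fail (print nothing) unless the array is online and raid1, as the log shows.
[ "$state" = online ] && [ "$level" = raid1 ] && echo "raid_bdev1 OK: $state $level"
```

In the actual test the JSON comes from `rpc_cmd bdev_raid_get_bdevs all` piped through `jq -r '.[] | select(.name == "raid_bdev1")'`, as the xtrace entries that follow show.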
00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:12.381 "name": "raid_bdev1", 00:24:12.381 "uuid": "cb3033a5-72c2-430a-abd1-9e4b3f8e6dec", 00:24:12.381 "strip_size_kb": 0, 00:24:12.381 "state": "online", 00:24:12.381 "raid_level": 
"raid1", 00:24:12.381 "superblock": false, 00:24:12.381 "num_base_bdevs": 4, 00:24:12.381 "num_base_bdevs_discovered": 4, 00:24:12.381 "num_base_bdevs_operational": 4, 00:24:12.381 "base_bdevs_list": [ 00:24:12.381 { 00:24:12.381 "name": "BaseBdev1", 00:24:12.381 "uuid": "ea2fd0e1-8373-5a8c-b0b2-c43e72e393b5", 00:24:12.381 "is_configured": true, 00:24:12.381 "data_offset": 0, 00:24:12.381 "data_size": 65536 00:24:12.381 }, 00:24:12.381 { 00:24:12.381 "name": "BaseBdev2", 00:24:12.381 "uuid": "d90b9e19-1a40-54ce-afde-f058a2ada751", 00:24:12.381 "is_configured": true, 00:24:12.381 "data_offset": 0, 00:24:12.381 "data_size": 65536 00:24:12.381 }, 00:24:12.381 { 00:24:12.381 "name": "BaseBdev3", 00:24:12.381 "uuid": "5adc74f8-7432-58a6-83c8-5de2c877c9d5", 00:24:12.381 "is_configured": true, 00:24:12.381 "data_offset": 0, 00:24:12.381 "data_size": 65536 00:24:12.381 }, 00:24:12.381 { 00:24:12.381 "name": "BaseBdev4", 00:24:12.381 "uuid": "9e1ae0d4-5904-5e1c-9651-20510275185a", 00:24:12.381 "is_configured": true, 00:24:12.381 "data_offset": 0, 00:24:12.381 "data_size": 65536 00:24:12.381 } 00:24:12.381 ] 00:24:12.381 }' 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:12.381 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.639 23:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:24:12.639 23:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:12.639 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.639 23:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.966 [2024-12-09 23:05:48.004858] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.966 23:05:48 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:12.966 [2024-12-09 23:05:48.252646] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:24:12.966 /dev/nbd0 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:12.966 1+0 records in 00:24:12.966 1+0 records out 00:24:12.966 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261646 s, 15.7 MB/s 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
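Annotation (not part of the log): the `waitfornbd` trace above shows the common pattern for waiting on `/dev/nbd0` after `nbd_start_disk`: poll `/proc/partitions` with `grep -q -w` until the device name appears, up to 20 attempts. A hedged sketch of that loop (`waitfornbd_sketch` is a hypothetical name; the optional second argument exists only so the sketch can be exercised against a file other than `/proc/partitions`):

```shell
# Approximation of the waitfornbd polling loop traced above: poll a
# partitions table (default /proc/partitions) for the device name,
# retrying up to 20 times before giving up.
waitfornbd_sketch() {
    name=$1
    parts=${2:-/proc/partitions}
    i=1
    while [ "$i" -le 20 ]; do
        # -w matches the device as a whole word, as in the traced grep.
        grep -q -w "$name" "$parts" && return 0
        sleep 0.1
        i=$((i + 1))
    done
    return 1
}
```

The traced helper additionally sanity-checks the new device with a 4 KiB direct-I/O `dd` read, which is the `1+0 records in` / `1+0 records out` pair visible in the log.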
00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:24:12.966 23:05:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:24:19.524 65536+0 records in 00:24:19.524 65536+0 records out 00:24:19.524 33554432 bytes (34 MB, 32 MiB) copied, 5.46794 s, 6.1 MB/s 00:24:19.524 23:05:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:24:19.524 23:05:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:19.524 23:05:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:19.524 23:05:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:19.524 23:05:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:24:19.524 23:05:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:19.524 23:05:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:19.524 [2024-12-09 23:05:53.964756] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:19.524 23:05:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:19.524 23:05:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:19.524 
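Annotation (not part of the log): the `dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct` above fills the whole RAID-1 bdev, and the byte count it reports follows directly from values captured earlier in the trace: `raid_bdev_size=65536` blocks times the `blocklen 512` logged at array creation. A trivial check of that arithmetic:

```shell
# Size of the full-device write performed above: raid_bdev_size (in blocks,
# from bdev_get_bdevs num_blocks) times the 512-byte block length logged by
# raid_bdev_configure_cont.
blocks=65536
blocklen=512
total=$((blocks * blocklen))
echo "$total bytes"   # 33554432 bytes, i.e. the "34 MB, 32 MiB" dd reports
```

This is why `write_unit_size=1` suffices for raid1 here: unlike the raid5f branch the script skips, no striping-unit rounding of the write size is needed.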
23:05:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:19.524 23:05:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:19.524 23:05:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:19.524 23:05:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:19.524 23:05:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:24:19.524 23:05:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:24:19.524 23:05:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:24:19.524 23:05:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.524 23:05:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:19.524 [2024-12-09 23:05:53.988905] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:19.524 23:05:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.524 23:05:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:19.524 23:05:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:19.524 23:05:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:19.524 23:05:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:19.524 23:05:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:19.524 23:05:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:19.524 23:05:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:19.524 23:05:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:19.524 23:05:53 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:19.524 23:05:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:19.524 23:05:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:19.524 23:05:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:19.524 23:05:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.524 23:05:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:19.524 23:05:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.524 23:05:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:19.524 "name": "raid_bdev1", 00:24:19.524 "uuid": "cb3033a5-72c2-430a-abd1-9e4b3f8e6dec", 00:24:19.524 "strip_size_kb": 0, 00:24:19.524 "state": "online", 00:24:19.524 "raid_level": "raid1", 00:24:19.524 "superblock": false, 00:24:19.524 "num_base_bdevs": 4, 00:24:19.524 "num_base_bdevs_discovered": 3, 00:24:19.524 "num_base_bdevs_operational": 3, 00:24:19.524 "base_bdevs_list": [ 00:24:19.524 { 00:24:19.524 "name": null, 00:24:19.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.524 "is_configured": false, 00:24:19.524 "data_offset": 0, 00:24:19.524 "data_size": 65536 00:24:19.524 }, 00:24:19.524 { 00:24:19.524 "name": "BaseBdev2", 00:24:19.524 "uuid": "d90b9e19-1a40-54ce-afde-f058a2ada751", 00:24:19.524 "is_configured": true, 00:24:19.524 "data_offset": 0, 00:24:19.524 "data_size": 65536 00:24:19.524 }, 00:24:19.524 { 00:24:19.524 "name": "BaseBdev3", 00:24:19.524 "uuid": "5adc74f8-7432-58a6-83c8-5de2c877c9d5", 00:24:19.524 "is_configured": true, 00:24:19.524 "data_offset": 0, 00:24:19.524 "data_size": 65536 00:24:19.524 }, 00:24:19.524 { 00:24:19.524 "name": "BaseBdev4", 00:24:19.524 "uuid": "9e1ae0d4-5904-5e1c-9651-20510275185a", 00:24:19.524 
"is_configured": true, 00:24:19.524 "data_offset": 0, 00:24:19.524 "data_size": 65536 00:24:19.524 } 00:24:19.524 ] 00:24:19.524 }' 00:24:19.524 23:05:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:19.524 23:05:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:19.524 23:05:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:19.524 23:05:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.524 23:05:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:19.524 [2024-12-09 23:05:54.296952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:19.524 [2024-12-09 23:05:54.305188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:24:19.524 23:05:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.524 23:05:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:24:19.524 [2024-12-09 23:05:54.306762] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:20.089 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:20.089 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:20.089 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:20.089 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:20.089 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:20.089 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:20.089 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:24:20.089 23:05:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.089 23:05:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:20.089 23:05:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.089 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:20.089 "name": "raid_bdev1", 00:24:20.089 "uuid": "cb3033a5-72c2-430a-abd1-9e4b3f8e6dec", 00:24:20.089 "strip_size_kb": 0, 00:24:20.089 "state": "online", 00:24:20.089 "raid_level": "raid1", 00:24:20.089 "superblock": false, 00:24:20.089 "num_base_bdevs": 4, 00:24:20.089 "num_base_bdevs_discovered": 4, 00:24:20.089 "num_base_bdevs_operational": 4, 00:24:20.089 "process": { 00:24:20.089 "type": "rebuild", 00:24:20.089 "target": "spare", 00:24:20.089 "progress": { 00:24:20.089 "blocks": 20480, 00:24:20.089 "percent": 31 00:24:20.089 } 00:24:20.089 }, 00:24:20.089 "base_bdevs_list": [ 00:24:20.089 { 00:24:20.089 "name": "spare", 00:24:20.089 "uuid": "c51e7f32-a59e-5e92-9f92-32dcb9c47544", 00:24:20.089 "is_configured": true, 00:24:20.089 "data_offset": 0, 00:24:20.089 "data_size": 65536 00:24:20.089 }, 00:24:20.089 { 00:24:20.089 "name": "BaseBdev2", 00:24:20.089 "uuid": "d90b9e19-1a40-54ce-afde-f058a2ada751", 00:24:20.089 "is_configured": true, 00:24:20.089 "data_offset": 0, 00:24:20.089 "data_size": 65536 00:24:20.089 }, 00:24:20.089 { 00:24:20.089 "name": "BaseBdev3", 00:24:20.089 "uuid": "5adc74f8-7432-58a6-83c8-5de2c877c9d5", 00:24:20.089 "is_configured": true, 00:24:20.089 "data_offset": 0, 00:24:20.089 "data_size": 65536 00:24:20.089 }, 00:24:20.089 { 00:24:20.089 "name": "BaseBdev4", 00:24:20.089 "uuid": "9e1ae0d4-5904-5e1c-9651-20510275185a", 00:24:20.089 "is_configured": true, 00:24:20.089 "data_offset": 0, 00:24:20.089 "data_size": 65536 00:24:20.089 } 00:24:20.089 ] 00:24:20.089 }' 00:24:20.089 23:05:55 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:20.089 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:20.089 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:20.089 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:20.089 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:20.089 23:05:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.089 23:05:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:20.089 [2024-12-09 23:05:55.417049] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:20.346 [2024-12-09 23:05:55.512302] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:20.346 [2024-12-09 23:05:55.512373] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:20.346 [2024-12-09 23:05:55.512387] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:20.346 [2024-12-09 23:05:55.512395] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:20.346 23:05:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.346 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:20.346 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:20.346 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:20.346 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:20.346 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:24:20.346 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:20.346 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:20.346 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:20.346 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:20.346 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:20.346 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:20.346 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:20.346 23:05:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.346 23:05:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:20.346 23:05:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.346 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:20.346 "name": "raid_bdev1", 00:24:20.346 "uuid": "cb3033a5-72c2-430a-abd1-9e4b3f8e6dec", 00:24:20.346 "strip_size_kb": 0, 00:24:20.346 "state": "online", 00:24:20.346 "raid_level": "raid1", 00:24:20.346 "superblock": false, 00:24:20.346 "num_base_bdevs": 4, 00:24:20.346 "num_base_bdevs_discovered": 3, 00:24:20.346 "num_base_bdevs_operational": 3, 00:24:20.346 "base_bdevs_list": [ 00:24:20.346 { 00:24:20.346 "name": null, 00:24:20.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.346 "is_configured": false, 00:24:20.346 "data_offset": 0, 00:24:20.346 "data_size": 65536 00:24:20.346 }, 00:24:20.346 { 00:24:20.346 "name": "BaseBdev2", 00:24:20.346 "uuid": "d90b9e19-1a40-54ce-afde-f058a2ada751", 00:24:20.346 "is_configured": true, 00:24:20.346 "data_offset": 0, 00:24:20.346 "data_size": 65536 00:24:20.346 }, 00:24:20.346 { 
00:24:20.346 "name": "BaseBdev3", 00:24:20.346 "uuid": "5adc74f8-7432-58a6-83c8-5de2c877c9d5", 00:24:20.346 "is_configured": true, 00:24:20.346 "data_offset": 0, 00:24:20.346 "data_size": 65536 00:24:20.346 }, 00:24:20.346 { 00:24:20.346 "name": "BaseBdev4", 00:24:20.346 "uuid": "9e1ae0d4-5904-5e1c-9651-20510275185a", 00:24:20.346 "is_configured": true, 00:24:20.346 "data_offset": 0, 00:24:20.346 "data_size": 65536 00:24:20.346 } 00:24:20.346 ] 00:24:20.346 }' 00:24:20.346 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:20.346 23:05:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:20.603 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:20.603 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:20.603 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:20.603 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:20.603 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:20.603 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:20.603 23:05:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.603 23:05:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:20.603 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:20.603 23:05:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.603 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:20.603 "name": "raid_bdev1", 00:24:20.603 "uuid": "cb3033a5-72c2-430a-abd1-9e4b3f8e6dec", 00:24:20.603 "strip_size_kb": 0, 00:24:20.603 "state": "online", 
00:24:20.603 "raid_level": "raid1", 00:24:20.603 "superblock": false, 00:24:20.603 "num_base_bdevs": 4, 00:24:20.603 "num_base_bdevs_discovered": 3, 00:24:20.603 "num_base_bdevs_operational": 3, 00:24:20.603 "base_bdevs_list": [ 00:24:20.603 { 00:24:20.603 "name": null, 00:24:20.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.603 "is_configured": false, 00:24:20.603 "data_offset": 0, 00:24:20.603 "data_size": 65536 00:24:20.603 }, 00:24:20.603 { 00:24:20.603 "name": "BaseBdev2", 00:24:20.603 "uuid": "d90b9e19-1a40-54ce-afde-f058a2ada751", 00:24:20.603 "is_configured": true, 00:24:20.603 "data_offset": 0, 00:24:20.603 "data_size": 65536 00:24:20.603 }, 00:24:20.603 { 00:24:20.603 "name": "BaseBdev3", 00:24:20.603 "uuid": "5adc74f8-7432-58a6-83c8-5de2c877c9d5", 00:24:20.603 "is_configured": true, 00:24:20.603 "data_offset": 0, 00:24:20.603 "data_size": 65536 00:24:20.603 }, 00:24:20.603 { 00:24:20.603 "name": "BaseBdev4", 00:24:20.603 "uuid": "9e1ae0d4-5904-5e1c-9651-20510275185a", 00:24:20.603 "is_configured": true, 00:24:20.603 "data_offset": 0, 00:24:20.603 "data_size": 65536 00:24:20.603 } 00:24:20.603 ] 00:24:20.603 }' 00:24:20.603 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:20.603 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:20.603 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:20.603 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:20.603 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:20.603 23:05:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.603 23:05:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:20.603 [2024-12-09 23:05:55.912636] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:20.603 [2024-12-09 23:05:55.920449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:24:20.603 23:05:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.603 23:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:24:20.603 [2024-12-09 23:05:55.922038] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:21.795 23:05:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:21.795 23:05:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:21.795 23:05:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:21.795 23:05:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:21.795 23:05:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:21.795 23:05:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:21.795 23:05:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:21.795 23:05:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.795 23:05:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:21.795 23:05:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.795 23:05:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:21.795 "name": "raid_bdev1", 00:24:21.795 "uuid": "cb3033a5-72c2-430a-abd1-9e4b3f8e6dec", 00:24:21.795 "strip_size_kb": 0, 00:24:21.795 "state": "online", 00:24:21.795 "raid_level": "raid1", 00:24:21.795 "superblock": false, 00:24:21.795 "num_base_bdevs": 4, 00:24:21.795 
"num_base_bdevs_discovered": 4, 00:24:21.795 "num_base_bdevs_operational": 4, 00:24:21.795 "process": { 00:24:21.795 "type": "rebuild", 00:24:21.795 "target": "spare", 00:24:21.795 "progress": { 00:24:21.795 "blocks": 20480, 00:24:21.795 "percent": 31 00:24:21.795 } 00:24:21.795 }, 00:24:21.795 "base_bdevs_list": [ 00:24:21.795 { 00:24:21.795 "name": "spare", 00:24:21.795 "uuid": "c51e7f32-a59e-5e92-9f92-32dcb9c47544", 00:24:21.795 "is_configured": true, 00:24:21.795 "data_offset": 0, 00:24:21.795 "data_size": 65536 00:24:21.795 }, 00:24:21.795 { 00:24:21.795 "name": "BaseBdev2", 00:24:21.795 "uuid": "d90b9e19-1a40-54ce-afde-f058a2ada751", 00:24:21.795 "is_configured": true, 00:24:21.795 "data_offset": 0, 00:24:21.795 "data_size": 65536 00:24:21.795 }, 00:24:21.795 { 00:24:21.795 "name": "BaseBdev3", 00:24:21.795 "uuid": "5adc74f8-7432-58a6-83c8-5de2c877c9d5", 00:24:21.795 "is_configured": true, 00:24:21.795 "data_offset": 0, 00:24:21.795 "data_size": 65536 00:24:21.795 }, 00:24:21.795 { 00:24:21.795 "name": "BaseBdev4", 00:24:21.795 "uuid": "9e1ae0d4-5904-5e1c-9651-20510275185a", 00:24:21.795 "is_configured": true, 00:24:21.795 "data_offset": 0, 00:24:21.795 "data_size": 65536 00:24:21.795 } 00:24:21.795 ] 00:24:21.795 }' 00:24:21.795 23:05:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:22.070 23:05:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:22.070 23:05:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:22.070 23:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:22.070 23:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:24:22.070 23:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:24:22.070 23:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:24:22.070 23:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:22.071 [2024-12-09 23:05:57.028347] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:22.071 [2024-12-09 23:05:57.127652] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:22.071 23:05:57 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:22.071 "name": "raid_bdev1", 00:24:22.071 "uuid": "cb3033a5-72c2-430a-abd1-9e4b3f8e6dec", 00:24:22.071 "strip_size_kb": 0, 00:24:22.071 "state": "online", 00:24:22.071 "raid_level": "raid1", 00:24:22.071 "superblock": false, 00:24:22.071 "num_base_bdevs": 4, 00:24:22.071 "num_base_bdevs_discovered": 3, 00:24:22.071 "num_base_bdevs_operational": 3, 00:24:22.071 "process": { 00:24:22.071 "type": "rebuild", 00:24:22.071 "target": "spare", 00:24:22.071 "progress": { 00:24:22.071 "blocks": 24576, 00:24:22.071 "percent": 37 00:24:22.071 } 00:24:22.071 }, 00:24:22.071 "base_bdevs_list": [ 00:24:22.071 { 00:24:22.071 "name": "spare", 00:24:22.071 "uuid": "c51e7f32-a59e-5e92-9f92-32dcb9c47544", 00:24:22.071 "is_configured": true, 00:24:22.071 "data_offset": 0, 00:24:22.071 "data_size": 65536 00:24:22.071 }, 00:24:22.071 { 00:24:22.071 "name": null, 00:24:22.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.071 "is_configured": false, 00:24:22.071 "data_offset": 0, 00:24:22.071 "data_size": 65536 00:24:22.071 }, 00:24:22.071 { 00:24:22.071 "name": "BaseBdev3", 00:24:22.071 "uuid": "5adc74f8-7432-58a6-83c8-5de2c877c9d5", 00:24:22.071 "is_configured": true, 00:24:22.071 "data_offset": 0, 00:24:22.071 "data_size": 65536 00:24:22.071 }, 00:24:22.071 { 00:24:22.071 "name": "BaseBdev4", 00:24:22.071 "uuid": "9e1ae0d4-5904-5e1c-9651-20510275185a", 00:24:22.071 "is_configured": true, 00:24:22.071 "data_offset": 0, 00:24:22.071 "data_size": 65536 00:24:22.071 } 00:24:22.071 ] 00:24:22.071 }' 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=365 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:22.071 "name": "raid_bdev1", 00:24:22.071 "uuid": "cb3033a5-72c2-430a-abd1-9e4b3f8e6dec", 00:24:22.071 "strip_size_kb": 0, 00:24:22.071 "state": "online", 00:24:22.071 "raid_level": "raid1", 00:24:22.071 "superblock": false, 00:24:22.071 "num_base_bdevs": 4, 00:24:22.071 "num_base_bdevs_discovered": 3, 00:24:22.071 "num_base_bdevs_operational": 3, 00:24:22.071 "process": { 00:24:22.071 "type": "rebuild", 00:24:22.071 "target": "spare", 00:24:22.071 "progress": { 
00:24:22.071 "blocks": 26624, 00:24:22.071 "percent": 40 00:24:22.071 } 00:24:22.071 }, 00:24:22.071 "base_bdevs_list": [ 00:24:22.071 { 00:24:22.071 "name": "spare", 00:24:22.071 "uuid": "c51e7f32-a59e-5e92-9f92-32dcb9c47544", 00:24:22.071 "is_configured": true, 00:24:22.071 "data_offset": 0, 00:24:22.071 "data_size": 65536 00:24:22.071 }, 00:24:22.071 { 00:24:22.071 "name": null, 00:24:22.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.071 "is_configured": false, 00:24:22.071 "data_offset": 0, 00:24:22.071 "data_size": 65536 00:24:22.071 }, 00:24:22.071 { 00:24:22.071 "name": "BaseBdev3", 00:24:22.071 "uuid": "5adc74f8-7432-58a6-83c8-5de2c877c9d5", 00:24:22.071 "is_configured": true, 00:24:22.071 "data_offset": 0, 00:24:22.071 "data_size": 65536 00:24:22.071 }, 00:24:22.071 { 00:24:22.071 "name": "BaseBdev4", 00:24:22.071 "uuid": "9e1ae0d4-5904-5e1c-9651-20510275185a", 00:24:22.071 "is_configured": true, 00:24:22.071 "data_offset": 0, 00:24:22.071 "data_size": 65536 00:24:22.071 } 00:24:22.071 ] 00:24:22.071 }' 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:22.071 23:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:23.005 23:05:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:23.005 23:05:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:23.005 23:05:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:23.005 23:05:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:24:23.005 23:05:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:23.005 23:05:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:23.005 23:05:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:23.005 23:05:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.005 23:05:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:23.005 23:05:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:23.005 23:05:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.263 23:05:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:23.263 "name": "raid_bdev1", 00:24:23.263 "uuid": "cb3033a5-72c2-430a-abd1-9e4b3f8e6dec", 00:24:23.263 "strip_size_kb": 0, 00:24:23.263 "state": "online", 00:24:23.263 "raid_level": "raid1", 00:24:23.263 "superblock": false, 00:24:23.263 "num_base_bdevs": 4, 00:24:23.263 "num_base_bdevs_discovered": 3, 00:24:23.263 "num_base_bdevs_operational": 3, 00:24:23.263 "process": { 00:24:23.263 "type": "rebuild", 00:24:23.263 "target": "spare", 00:24:23.263 "progress": { 00:24:23.263 "blocks": 49152, 00:24:23.263 "percent": 75 00:24:23.263 } 00:24:23.263 }, 00:24:23.263 "base_bdevs_list": [ 00:24:23.263 { 00:24:23.263 "name": "spare", 00:24:23.263 "uuid": "c51e7f32-a59e-5e92-9f92-32dcb9c47544", 00:24:23.263 "is_configured": true, 00:24:23.263 "data_offset": 0, 00:24:23.263 "data_size": 65536 00:24:23.263 }, 00:24:23.263 { 00:24:23.263 "name": null, 00:24:23.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.263 "is_configured": false, 00:24:23.263 "data_offset": 0, 00:24:23.263 "data_size": 65536 00:24:23.263 }, 00:24:23.263 { 00:24:23.263 "name": "BaseBdev3", 00:24:23.263 "uuid": 
"5adc74f8-7432-58a6-83c8-5de2c877c9d5", 00:24:23.263 "is_configured": true, 00:24:23.263 "data_offset": 0, 00:24:23.263 "data_size": 65536 00:24:23.263 }, 00:24:23.263 { 00:24:23.263 "name": "BaseBdev4", 00:24:23.263 "uuid": "9e1ae0d4-5904-5e1c-9651-20510275185a", 00:24:23.263 "is_configured": true, 00:24:23.263 "data_offset": 0, 00:24:23.263 "data_size": 65536 00:24:23.263 } 00:24:23.263 ] 00:24:23.263 }' 00:24:23.263 23:05:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:23.263 23:05:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:23.263 23:05:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:23.263 23:05:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:23.263 23:05:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:23.828 [2024-12-09 23:05:59.136902] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:23.828 [2024-12-09 23:05:59.136985] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:23.828 [2024-12-09 23:05:59.137023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:24.085 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:24.085 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:24.085 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:24.085 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:24.085 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:24.085 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:24.085 23:05:59 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:24.085 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:24.085 23:05:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.085 23:05:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.343 23:05:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.343 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:24.343 "name": "raid_bdev1", 00:24:24.343 "uuid": "cb3033a5-72c2-430a-abd1-9e4b3f8e6dec", 00:24:24.343 "strip_size_kb": 0, 00:24:24.343 "state": "online", 00:24:24.343 "raid_level": "raid1", 00:24:24.343 "superblock": false, 00:24:24.343 "num_base_bdevs": 4, 00:24:24.343 "num_base_bdevs_discovered": 3, 00:24:24.343 "num_base_bdevs_operational": 3, 00:24:24.343 "base_bdevs_list": [ 00:24:24.343 { 00:24:24.344 "name": "spare", 00:24:24.344 "uuid": "c51e7f32-a59e-5e92-9f92-32dcb9c47544", 00:24:24.344 "is_configured": true, 00:24:24.344 "data_offset": 0, 00:24:24.344 "data_size": 65536 00:24:24.344 }, 00:24:24.344 { 00:24:24.344 "name": null, 00:24:24.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.344 "is_configured": false, 00:24:24.344 "data_offset": 0, 00:24:24.344 "data_size": 65536 00:24:24.344 }, 00:24:24.344 { 00:24:24.344 "name": "BaseBdev3", 00:24:24.344 "uuid": "5adc74f8-7432-58a6-83c8-5de2c877c9d5", 00:24:24.344 "is_configured": true, 00:24:24.344 "data_offset": 0, 00:24:24.344 "data_size": 65536 00:24:24.344 }, 00:24:24.344 { 00:24:24.344 "name": "BaseBdev4", 00:24:24.344 "uuid": "9e1ae0d4-5904-5e1c-9651-20510275185a", 00:24:24.344 "is_configured": true, 00:24:24.344 "data_offset": 0, 00:24:24.344 "data_size": 65536 00:24:24.344 } 00:24:24.344 ] 00:24:24.344 }' 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:24.344 "name": "raid_bdev1", 00:24:24.344 "uuid": "cb3033a5-72c2-430a-abd1-9e4b3f8e6dec", 00:24:24.344 "strip_size_kb": 0, 00:24:24.344 "state": "online", 00:24:24.344 "raid_level": "raid1", 00:24:24.344 "superblock": false, 00:24:24.344 "num_base_bdevs": 4, 00:24:24.344 "num_base_bdevs_discovered": 3, 00:24:24.344 "num_base_bdevs_operational": 3, 00:24:24.344 
"base_bdevs_list": [ 00:24:24.344 { 00:24:24.344 "name": "spare", 00:24:24.344 "uuid": "c51e7f32-a59e-5e92-9f92-32dcb9c47544", 00:24:24.344 "is_configured": true, 00:24:24.344 "data_offset": 0, 00:24:24.344 "data_size": 65536 00:24:24.344 }, 00:24:24.344 { 00:24:24.344 "name": null, 00:24:24.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.344 "is_configured": false, 00:24:24.344 "data_offset": 0, 00:24:24.344 "data_size": 65536 00:24:24.344 }, 00:24:24.344 { 00:24:24.344 "name": "BaseBdev3", 00:24:24.344 "uuid": "5adc74f8-7432-58a6-83c8-5de2c877c9d5", 00:24:24.344 "is_configured": true, 00:24:24.344 "data_offset": 0, 00:24:24.344 "data_size": 65536 00:24:24.344 }, 00:24:24.344 { 00:24:24.344 "name": "BaseBdev4", 00:24:24.344 "uuid": "9e1ae0d4-5904-5e1c-9651-20510275185a", 00:24:24.344 "is_configured": true, 00:24:24.344 "data_offset": 0, 00:24:24.344 "data_size": 65536 00:24:24.344 } 00:24:24.344 ] 00:24:24.344 }' 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.344 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:24.344 "name": "raid_bdev1", 00:24:24.344 "uuid": "cb3033a5-72c2-430a-abd1-9e4b3f8e6dec", 00:24:24.344 "strip_size_kb": 0, 00:24:24.344 "state": "online", 00:24:24.344 "raid_level": "raid1", 00:24:24.344 "superblock": false, 00:24:24.344 "num_base_bdevs": 4, 00:24:24.344 "num_base_bdevs_discovered": 3, 00:24:24.344 "num_base_bdevs_operational": 3, 00:24:24.344 "base_bdevs_list": [ 00:24:24.344 { 00:24:24.344 "name": "spare", 00:24:24.344 "uuid": "c51e7f32-a59e-5e92-9f92-32dcb9c47544", 00:24:24.344 "is_configured": true, 00:24:24.344 "data_offset": 0, 00:24:24.344 "data_size": 65536 00:24:24.344 }, 00:24:24.344 { 00:24:24.344 "name": null, 00:24:24.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.344 "is_configured": false, 00:24:24.344 "data_offset": 0, 00:24:24.344 "data_size": 65536 00:24:24.344 }, 00:24:24.344 { 00:24:24.344 "name": "BaseBdev3", 00:24:24.344 "uuid": 
"5adc74f8-7432-58a6-83c8-5de2c877c9d5", 00:24:24.344 "is_configured": true, 00:24:24.345 "data_offset": 0, 00:24:24.345 "data_size": 65536 00:24:24.345 }, 00:24:24.345 { 00:24:24.345 "name": "BaseBdev4", 00:24:24.345 "uuid": "9e1ae0d4-5904-5e1c-9651-20510275185a", 00:24:24.345 "is_configured": true, 00:24:24.345 "data_offset": 0, 00:24:24.345 "data_size": 65536 00:24:24.345 } 00:24:24.345 ] 00:24:24.345 }' 00:24:24.345 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:24.345 23:05:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.909 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:24.909 23:05:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.909 23:05:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.909 [2024-12-09 23:05:59.981465] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:24.909 [2024-12-09 23:05:59.981489] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:24.909 [2024-12-09 23:05:59.981548] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:24.909 [2024-12-09 23:05:59.981614] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:24.909 [2024-12-09 23:05:59.981622] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:24.909 23:05:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.909 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:24:24.909 23:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:24.909 23:05:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:24:24.909 23:05:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.909 23:05:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.909 23:06:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:24:24.909 23:06:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:24.909 23:06:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:24:24.909 23:06:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:24.909 23:06:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:24.909 23:06:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:24.909 23:06:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:24.909 23:06:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:24.909 23:06:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:24.909 23:06:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:24:24.910 23:06:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:24.910 23:06:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:24.910 23:06:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:25.167 /dev/nbd0 00:24:25.167 23:06:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:25.167 23:06:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:25.167 23:06:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:25.167 23:06:00 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:24:25.167 23:06:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:25.167 23:06:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:25.167 23:06:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:25.167 23:06:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:24:25.167 23:06:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:25.167 23:06:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:25.167 23:06:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:25.167 1+0 records in 00:24:25.167 1+0 records out 00:24:25.167 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217267 s, 18.9 MB/s 00:24:25.167 23:06:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:25.167 23:06:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:24:25.167 23:06:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:25.167 23:06:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:25.167 23:06:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:24:25.167 23:06:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:25.167 23:06:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:25.167 23:06:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:24:25.423 /dev/nbd1 00:24:25.423 
23:06:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:25.423 23:06:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:25.423 23:06:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:24:25.423 23:06:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:24:25.423 23:06:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:25.423 23:06:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:25.423 23:06:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:24:25.423 23:06:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:24:25.423 23:06:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:25.423 23:06:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:25.423 23:06:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:25.423 1+0 records in 00:24:25.423 1+0 records out 00:24:25.423 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340803 s, 12.0 MB/s 00:24:25.423 23:06:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:25.423 23:06:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:24:25.423 23:06:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:25.423 23:06:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:25.423 23:06:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:24:25.423 23:06:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:24:25.423 23:06:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:25.423 23:06:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:25.423 23:06:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:24:25.423 23:06:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:25.423 23:06:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:25.423 23:06:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:25.423 23:06:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:24:25.423 23:06:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:25.423 23:06:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:25.680 23:06:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:25.680 23:06:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:25.680 23:06:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:25.681 23:06:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:25.681 23:06:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:25.681 23:06:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:25.681 23:06:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:24:25.681 23:06:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:24:25.681 23:06:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:25.681 23:06:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:24:25.938 23:06:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:25.938 23:06:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:25.938 23:06:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:25.938 23:06:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:25.938 23:06:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:25.938 23:06:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:25.938 23:06:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:24:25.938 23:06:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:24:25.938 23:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:24:25.938 23:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75459 00:24:25.938 23:06:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75459 ']' 00:24:25.938 23:06:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75459 00:24:25.938 23:06:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:24:25.938 23:06:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:25.938 23:06:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75459 00:24:25.938 killing process with pid 75459 00:24:25.938 Received shutdown signal, test time was about 60.000000 seconds 00:24:25.938 00:24:25.938 Latency(us) 00:24:25.938 [2024-12-09T23:06:01.301Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.938 [2024-12-09T23:06:01.301Z] 
=================================================================================================================== 00:24:25.938 [2024-12-09T23:06:01.301Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:25.938 23:06:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:25.938 23:06:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:25.938 23:06:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75459' 00:24:25.938 23:06:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75459 00:24:25.938 [2024-12-09 23:06:01.131623] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:25.938 23:06:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75459 00:24:26.195 [2024-12-09 23:06:01.373442] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:26.761 23:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:24:26.761 00:24:26.761 real 0m15.326s 00:24:26.761 user 0m16.913s 00:24:26.761 sys 0m2.458s 00:24:26.761 23:06:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:26.761 ************************************ 00:24:26.761 END TEST raid_rebuild_test 00:24:26.761 ************************************ 00:24:26.761 23:06:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:26.761 23:06:01 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:24:26.761 23:06:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:24:26.761 23:06:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:26.761 23:06:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:26.761 ************************************ 00:24:26.761 START TEST raid_rebuild_test_sb 00:24:26.761 
************************************ 00:24:26.761 23:06:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:24:26.761 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:24:26.761 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:24:26.761 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:24:26.761 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:24:26.761 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:24:26.761 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:24:26.761 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:26.761 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:24:26.761 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:26.761 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:26.761 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:24:26.761 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:26.761 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:26.761 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:24:26.761 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:26.761 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:26.761 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:24:26.761 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:24:26.761 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:26.761 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:26.761 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:24:26.761 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:24:26.761 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:24:26.761 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:24:26.761 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:24:26.761 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:24:26.761 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:24:26.761 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:24:26.761 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:24:26.761 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:24:26.761 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75883 00:24:26.761 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75883 00:24:26.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:26.761 23:06:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75883 ']' 00:24:26.762 23:06:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.762 23:06:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:26.762 23:06:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.762 23:06:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:26.762 23:06:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:26.762 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:26.762 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:26.762 Zero copy mechanism will not be used. 00:24:26.762 [2024-12-09 23:06:02.071170] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:24:26.762 [2024-12-09 23:06:02.071295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75883 ] 00:24:27.020 [2024-12-09 23:06:02.231223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.020 [2024-12-09 23:06:02.330977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.277 [2024-12-09 23:06:02.465878] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:27.277 [2024-12-09 23:06:02.465907] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:27.602 23:06:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:27.602 23:06:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:24:27.602 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:27.602 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:27.602 23:06:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.602 23:06:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.602 BaseBdev1_malloc 00:24:27.602 23:06:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.602 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:27.602 23:06:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.602 23:06:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.602 [2024-12-09 23:06:02.939563] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:24:27.602 [2024-12-09 23:06:02.939622] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:27.602 [2024-12-09 23:06:02.939643] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:27.602 [2024-12-09 23:06:02.939654] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:27.602 [2024-12-09 23:06:02.941758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:27.602 [2024-12-09 23:06:02.941902] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:27.602 BaseBdev1 00:24:27.602 23:06:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.602 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:27.602 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:27.602 23:06:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.602 23:06:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.860 BaseBdev2_malloc 00:24:27.860 23:06:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.860 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:27.860 23:06:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.860 23:06:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.860 [2024-12-09 23:06:02.975595] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:27.860 [2024-12-09 23:06:02.975655] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:27.860 [2024-12-09 23:06:02.975675] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:27.860 [2024-12-09 23:06:02.975686] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:27.860 [2024-12-09 23:06:02.977804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:27.860 [2024-12-09 23:06:02.977839] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:27.860 BaseBdev2 00:24:27.860 23:06:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.860 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:27.860 23:06:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:27.860 23:06:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.860 23:06:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.860 BaseBdev3_malloc 00:24:27.860 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.860 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:27.860 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.860 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.860 [2024-12-09 23:06:03.021720] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:27.860 [2024-12-09 23:06:03.021777] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:27.860 [2024-12-09 23:06:03.021797] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:27.860 [2024-12-09 23:06:03.021809] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:24:27.860 [2024-12-09 23:06:03.023880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:27.860 [2024-12-09 23:06:03.024047] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:27.860 BaseBdev3 00:24:27.860 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.860 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:27.860 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:27.860 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.860 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.860 BaseBdev4_malloc 00:24:27.860 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.860 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:27.860 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.860 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.860 [2024-12-09 23:06:03.057602] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:27.860 [2024-12-09 23:06:03.057661] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:27.860 [2024-12-09 23:06:03.057678] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:24:27.860 [2024-12-09 23:06:03.057687] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:27.860 [2024-12-09 23:06:03.059829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:27.860 [2024-12-09 23:06:03.059870] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:27.860 BaseBdev4 00:24:27.860 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.860 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:24:27.860 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.860 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.860 spare_malloc 00:24:27.860 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.860 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:27.860 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.860 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.860 spare_delay 00:24:27.860 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.860 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:27.860 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.860 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.860 [2024-12-09 23:06:03.101898] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:27.860 [2024-12-09 23:06:03.101949] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:27.860 [2024-12-09 23:06:03.101965] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:24:27.860 [2024-12-09 23:06:03.101975] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:24:27.860 [2024-12-09 23:06:03.104029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:27.860 [2024-12-09 23:06:03.104064] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:27.860 spare 00:24:27.860 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.860 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:24:27.860 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.860 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.860 [2024-12-09 23:06:03.109948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:27.861 [2024-12-09 23:06:03.111738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:27.861 [2024-12-09 23:06:03.111797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:27.861 [2024-12-09 23:06:03.111848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:27.861 [2024-12-09 23:06:03.112023] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:27.861 [2024-12-09 23:06:03.112037] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:27.861 [2024-12-09 23:06:03.112299] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:27.861 [2024-12-09 23:06:03.112448] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:27.861 [2024-12-09 23:06:03.112462] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:27.861 [2024-12-09 23:06:03.112616] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:27.861 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.861 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:24:27.861 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:27.861 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:27.861 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:27.861 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:27.861 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:27.861 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:27.861 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:27.861 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:27.861 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:27.861 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:27.861 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.861 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:27.861 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.861 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.861 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:27.861 "name": "raid_bdev1", 00:24:27.861 "uuid": 
"e6c0d1d2-f94a-4e39-863e-34186d856efe", 00:24:27.861 "strip_size_kb": 0, 00:24:27.861 "state": "online", 00:24:27.861 "raid_level": "raid1", 00:24:27.861 "superblock": true, 00:24:27.861 "num_base_bdevs": 4, 00:24:27.861 "num_base_bdevs_discovered": 4, 00:24:27.861 "num_base_bdevs_operational": 4, 00:24:27.861 "base_bdevs_list": [ 00:24:27.861 { 00:24:27.861 "name": "BaseBdev1", 00:24:27.861 "uuid": "125e42b7-22ab-5943-9867-c3b2df8efa6e", 00:24:27.861 "is_configured": true, 00:24:27.861 "data_offset": 2048, 00:24:27.861 "data_size": 63488 00:24:27.861 }, 00:24:27.861 { 00:24:27.861 "name": "BaseBdev2", 00:24:27.861 "uuid": "904add92-23d4-5c97-990d-e6db4d41d1f9", 00:24:27.861 "is_configured": true, 00:24:27.861 "data_offset": 2048, 00:24:27.861 "data_size": 63488 00:24:27.861 }, 00:24:27.861 { 00:24:27.861 "name": "BaseBdev3", 00:24:27.861 "uuid": "4b769226-35c7-515c-9a95-6aebac9a75b8", 00:24:27.861 "is_configured": true, 00:24:27.861 "data_offset": 2048, 00:24:27.861 "data_size": 63488 00:24:27.861 }, 00:24:27.861 { 00:24:27.861 "name": "BaseBdev4", 00:24:27.861 "uuid": "6dccdfaf-1573-5fa7-bc20-40494ffa0542", 00:24:27.861 "is_configured": true, 00:24:27.861 "data_offset": 2048, 00:24:27.861 "data_size": 63488 00:24:27.861 } 00:24:27.861 ] 00:24:27.861 }' 00:24:27.861 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:27.861 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.119 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:28.119 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.119 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.119 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:24:28.119 [2024-12-09 23:06:03.454406] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:24:28.119 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.119 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:24:28.375 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:28.375 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:28.375 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.375 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.375 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.375 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:24:28.375 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:24:28.375 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:24:28.375 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:24:28.375 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:24:28.375 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:28.375 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:28.375 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:28.375 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:28.375 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:28.375 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:24:28.375 23:06:03 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:28.375 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:28.375 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:28.375 [2024-12-09 23:06:03.698140] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:24:28.375 /dev/nbd0 00:24:28.375 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:28.375 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:28.375 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:28.375 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:24:28.375 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:28.375 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:28.375 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:28.632 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:24:28.632 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:28.632 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:28.632 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:28.632 1+0 records in 00:24:28.632 1+0 records out 00:24:28.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000558756 s, 7.3 MB/s 00:24:28.632 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:28.632 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:24:28.632 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:28.632 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:28.632 23:06:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:24:28.632 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:28.632 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:28.632 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:24:28.632 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:24:28.632 23:06:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:24:33.994 63488+0 records in 00:24:33.994 63488+0 records out 00:24:33.994 32505856 bytes (33 MB, 31 MiB) copied, 5.40806 s, 6.0 MB/s 00:24:33.994 23:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:24:33.994 23:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:33.994 23:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:33.994 23:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:33.994 23:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:24:33.994 23:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:33.994 23:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:24:33.994 [2024-12-09 23:06:09.321632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:33.994 23:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:33.994 23:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:33.994 23:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:33.994 23:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:33.994 23:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:33.994 23:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:33.994 23:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:33.994 23:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:33.994 23:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:24:33.994 23:06:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.994 23:06:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:33.994 [2024-12-09 23:06:09.345699] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:33.994 23:06:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.994 23:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:33.994 23:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:33.994 23:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:33.994 23:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:33.994 23:06:09 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:33.994 23:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:33.994 23:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:33.994 23:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:33.994 23:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:33.994 23:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:34.254 23:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:34.254 23:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:34.254 23:06:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.254 23:06:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:34.254 23:06:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.254 23:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:34.254 "name": "raid_bdev1", 00:24:34.254 "uuid": "e6c0d1d2-f94a-4e39-863e-34186d856efe", 00:24:34.254 "strip_size_kb": 0, 00:24:34.254 "state": "online", 00:24:34.254 "raid_level": "raid1", 00:24:34.254 "superblock": true, 00:24:34.254 "num_base_bdevs": 4, 00:24:34.254 "num_base_bdevs_discovered": 3, 00:24:34.254 "num_base_bdevs_operational": 3, 00:24:34.254 "base_bdevs_list": [ 00:24:34.254 { 00:24:34.254 "name": null, 00:24:34.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:34.254 "is_configured": false, 00:24:34.254 "data_offset": 0, 00:24:34.254 "data_size": 63488 00:24:34.254 }, 00:24:34.254 { 00:24:34.254 "name": "BaseBdev2", 00:24:34.254 "uuid": "904add92-23d4-5c97-990d-e6db4d41d1f9", 00:24:34.254 "is_configured": true, 00:24:34.254 
"data_offset": 2048, 00:24:34.254 "data_size": 63488 00:24:34.254 }, 00:24:34.254 { 00:24:34.254 "name": "BaseBdev3", 00:24:34.254 "uuid": "4b769226-35c7-515c-9a95-6aebac9a75b8", 00:24:34.254 "is_configured": true, 00:24:34.254 "data_offset": 2048, 00:24:34.254 "data_size": 63488 00:24:34.254 }, 00:24:34.254 { 00:24:34.254 "name": "BaseBdev4", 00:24:34.254 "uuid": "6dccdfaf-1573-5fa7-bc20-40494ffa0542", 00:24:34.254 "is_configured": true, 00:24:34.254 "data_offset": 2048, 00:24:34.254 "data_size": 63488 00:24:34.254 } 00:24:34.254 ] 00:24:34.254 }' 00:24:34.254 23:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:34.254 23:06:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:34.510 23:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:34.510 23:06:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.510 23:06:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:34.510 [2024-12-09 23:06:09.645749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:34.510 [2024-12-09 23:06:09.654155] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:24:34.510 23:06:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.510 23:06:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:24:34.510 [2024-12-09 23:06:09.655774] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:35.441 23:06:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:35.441 23:06:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:35.441 23:06:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:24:35.441 23:06:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:35.441 23:06:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:35.441 23:06:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:35.441 23:06:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.441 23:06:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:35.441 23:06:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:35.441 23:06:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.441 23:06:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:35.441 "name": "raid_bdev1", 00:24:35.441 "uuid": "e6c0d1d2-f94a-4e39-863e-34186d856efe", 00:24:35.441 "strip_size_kb": 0, 00:24:35.441 "state": "online", 00:24:35.441 "raid_level": "raid1", 00:24:35.441 "superblock": true, 00:24:35.441 "num_base_bdevs": 4, 00:24:35.441 "num_base_bdevs_discovered": 4, 00:24:35.441 "num_base_bdevs_operational": 4, 00:24:35.441 "process": { 00:24:35.441 "type": "rebuild", 00:24:35.441 "target": "spare", 00:24:35.441 "progress": { 00:24:35.441 "blocks": 20480, 00:24:35.441 "percent": 32 00:24:35.441 } 00:24:35.441 }, 00:24:35.441 "base_bdevs_list": [ 00:24:35.441 { 00:24:35.441 "name": "spare", 00:24:35.441 "uuid": "b6368b21-e12b-5e34-9cf6-b6147f3464f8", 00:24:35.441 "is_configured": true, 00:24:35.441 "data_offset": 2048, 00:24:35.441 "data_size": 63488 00:24:35.441 }, 00:24:35.441 { 00:24:35.441 "name": "BaseBdev2", 00:24:35.441 "uuid": "904add92-23d4-5c97-990d-e6db4d41d1f9", 00:24:35.441 "is_configured": true, 00:24:35.441 "data_offset": 2048, 00:24:35.441 "data_size": 63488 00:24:35.441 }, 00:24:35.441 { 00:24:35.441 "name": "BaseBdev3", 00:24:35.441 "uuid": 
"4b769226-35c7-515c-9a95-6aebac9a75b8", 00:24:35.441 "is_configured": true, 00:24:35.441 "data_offset": 2048, 00:24:35.441 "data_size": 63488 00:24:35.441 }, 00:24:35.441 { 00:24:35.441 "name": "BaseBdev4", 00:24:35.441 "uuid": "6dccdfaf-1573-5fa7-bc20-40494ffa0542", 00:24:35.441 "is_configured": true, 00:24:35.441 "data_offset": 2048, 00:24:35.441 "data_size": 63488 00:24:35.441 } 00:24:35.441 ] 00:24:35.441 }' 00:24:35.441 23:06:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:35.441 23:06:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:35.441 23:06:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:35.441 23:06:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:35.441 23:06:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:35.441 23:06:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.441 23:06:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:35.441 [2024-12-09 23:06:10.765718] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:35.701 [2024-12-09 23:06:10.861313] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:35.701 [2024-12-09 23:06:10.861537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:35.701 [2024-12-09 23:06:10.861556] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:35.701 [2024-12-09 23:06:10.861565] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:35.701 23:06:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.701 23:06:10 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:35.701 23:06:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:35.701 23:06:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:35.701 23:06:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:35.701 23:06:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:35.701 23:06:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:35.701 23:06:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:35.701 23:06:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:35.701 23:06:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:35.701 23:06:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:35.701 23:06:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:35.701 23:06:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:35.701 23:06:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.701 23:06:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:35.701 23:06:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.701 23:06:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:35.701 "name": "raid_bdev1", 00:24:35.701 "uuid": "e6c0d1d2-f94a-4e39-863e-34186d856efe", 00:24:35.701 "strip_size_kb": 0, 00:24:35.701 "state": "online", 00:24:35.701 "raid_level": "raid1", 00:24:35.701 "superblock": true, 00:24:35.701 "num_base_bdevs": 4, 00:24:35.701 
"num_base_bdevs_discovered": 3, 00:24:35.701 "num_base_bdevs_operational": 3, 00:24:35.701 "base_bdevs_list": [ 00:24:35.701 { 00:24:35.701 "name": null, 00:24:35.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:35.701 "is_configured": false, 00:24:35.701 "data_offset": 0, 00:24:35.701 "data_size": 63488 00:24:35.701 }, 00:24:35.701 { 00:24:35.701 "name": "BaseBdev2", 00:24:35.701 "uuid": "904add92-23d4-5c97-990d-e6db4d41d1f9", 00:24:35.701 "is_configured": true, 00:24:35.701 "data_offset": 2048, 00:24:35.701 "data_size": 63488 00:24:35.701 }, 00:24:35.701 { 00:24:35.701 "name": "BaseBdev3", 00:24:35.701 "uuid": "4b769226-35c7-515c-9a95-6aebac9a75b8", 00:24:35.701 "is_configured": true, 00:24:35.701 "data_offset": 2048, 00:24:35.701 "data_size": 63488 00:24:35.701 }, 00:24:35.701 { 00:24:35.701 "name": "BaseBdev4", 00:24:35.701 "uuid": "6dccdfaf-1573-5fa7-bc20-40494ffa0542", 00:24:35.701 "is_configured": true, 00:24:35.701 "data_offset": 2048, 00:24:35.701 "data_size": 63488 00:24:35.701 } 00:24:35.701 ] 00:24:35.701 }' 00:24:35.701 23:06:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:35.701 23:06:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:35.958 23:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:35.958 23:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:35.958 23:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:35.958 23:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:35.958 23:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:35.958 23:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:35.958 23:06:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:24:35.958 23:06:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:35.958 23:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:35.958 23:06:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.958 23:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:35.958 "name": "raid_bdev1", 00:24:35.958 "uuid": "e6c0d1d2-f94a-4e39-863e-34186d856efe", 00:24:35.958 "strip_size_kb": 0, 00:24:35.958 "state": "online", 00:24:35.958 "raid_level": "raid1", 00:24:35.958 "superblock": true, 00:24:35.958 "num_base_bdevs": 4, 00:24:35.958 "num_base_bdevs_discovered": 3, 00:24:35.958 "num_base_bdevs_operational": 3, 00:24:35.958 "base_bdevs_list": [ 00:24:35.958 { 00:24:35.958 "name": null, 00:24:35.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:35.958 "is_configured": false, 00:24:35.958 "data_offset": 0, 00:24:35.958 "data_size": 63488 00:24:35.958 }, 00:24:35.958 { 00:24:35.958 "name": "BaseBdev2", 00:24:35.958 "uuid": "904add92-23d4-5c97-990d-e6db4d41d1f9", 00:24:35.959 "is_configured": true, 00:24:35.959 "data_offset": 2048, 00:24:35.959 "data_size": 63488 00:24:35.959 }, 00:24:35.959 { 00:24:35.959 "name": "BaseBdev3", 00:24:35.959 "uuid": "4b769226-35c7-515c-9a95-6aebac9a75b8", 00:24:35.959 "is_configured": true, 00:24:35.959 "data_offset": 2048, 00:24:35.959 "data_size": 63488 00:24:35.959 }, 00:24:35.959 { 00:24:35.959 "name": "BaseBdev4", 00:24:35.959 "uuid": "6dccdfaf-1573-5fa7-bc20-40494ffa0542", 00:24:35.959 "is_configured": true, 00:24:35.959 "data_offset": 2048, 00:24:35.959 "data_size": 63488 00:24:35.959 } 00:24:35.959 ] 00:24:35.959 }' 00:24:35.959 23:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:35.959 23:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:24:35.959 23:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:35.959 23:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:35.959 23:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:35.959 23:06:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.959 23:06:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:35.959 [2024-12-09 23:06:11.265740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:35.959 [2024-12-09 23:06:11.273456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:24:35.959 23:06:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.959 23:06:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:24:35.959 [2024-12-09 23:06:11.275078] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:37.329 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:37.329 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:37.329 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:37.329 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:37.329 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:37.329 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:37.329 23:06:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.329 23:06:12 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:24:37.329 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:37.329 23:06:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.329 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:37.329 "name": "raid_bdev1", 00:24:37.329 "uuid": "e6c0d1d2-f94a-4e39-863e-34186d856efe", 00:24:37.329 "strip_size_kb": 0, 00:24:37.329 "state": "online", 00:24:37.329 "raid_level": "raid1", 00:24:37.329 "superblock": true, 00:24:37.329 "num_base_bdevs": 4, 00:24:37.329 "num_base_bdevs_discovered": 4, 00:24:37.329 "num_base_bdevs_operational": 4, 00:24:37.329 "process": { 00:24:37.329 "type": "rebuild", 00:24:37.329 "target": "spare", 00:24:37.329 "progress": { 00:24:37.329 "blocks": 20480, 00:24:37.329 "percent": 32 00:24:37.329 } 00:24:37.329 }, 00:24:37.329 "base_bdevs_list": [ 00:24:37.329 { 00:24:37.329 "name": "spare", 00:24:37.329 "uuid": "b6368b21-e12b-5e34-9cf6-b6147f3464f8", 00:24:37.329 "is_configured": true, 00:24:37.329 "data_offset": 2048, 00:24:37.329 "data_size": 63488 00:24:37.329 }, 00:24:37.329 { 00:24:37.329 "name": "BaseBdev2", 00:24:37.329 "uuid": "904add92-23d4-5c97-990d-e6db4d41d1f9", 00:24:37.329 "is_configured": true, 00:24:37.329 "data_offset": 2048, 00:24:37.329 "data_size": 63488 00:24:37.329 }, 00:24:37.329 { 00:24:37.329 "name": "BaseBdev3", 00:24:37.329 "uuid": "4b769226-35c7-515c-9a95-6aebac9a75b8", 00:24:37.329 "is_configured": true, 00:24:37.329 "data_offset": 2048, 00:24:37.330 "data_size": 63488 00:24:37.330 }, 00:24:37.330 { 00:24:37.330 "name": "BaseBdev4", 00:24:37.330 "uuid": "6dccdfaf-1573-5fa7-bc20-40494ffa0542", 00:24:37.330 "is_configured": true, 00:24:37.330 "data_offset": 2048, 00:24:37.330 "data_size": 63488 00:24:37.330 } 00:24:37.330 ] 00:24:37.330 }' 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:24:37.330 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:37.330 [2024-12-09 23:06:12.385360] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:37.330 [2024-12-09 23:06:12.580624] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:37.330 "name": "raid_bdev1", 00:24:37.330 "uuid": "e6c0d1d2-f94a-4e39-863e-34186d856efe", 00:24:37.330 "strip_size_kb": 0, 00:24:37.330 "state": "online", 00:24:37.330 "raid_level": "raid1", 00:24:37.330 "superblock": true, 00:24:37.330 "num_base_bdevs": 4, 00:24:37.330 "num_base_bdevs_discovered": 3, 00:24:37.330 "num_base_bdevs_operational": 3, 00:24:37.330 "process": { 00:24:37.330 "type": "rebuild", 00:24:37.330 "target": "spare", 00:24:37.330 "progress": { 00:24:37.330 "blocks": 24576, 00:24:37.330 "percent": 38 00:24:37.330 } 00:24:37.330 }, 00:24:37.330 "base_bdevs_list": [ 00:24:37.330 { 00:24:37.330 "name": "spare", 00:24:37.330 "uuid": "b6368b21-e12b-5e34-9cf6-b6147f3464f8", 00:24:37.330 "is_configured": true, 00:24:37.330 "data_offset": 2048, 00:24:37.330 "data_size": 63488 00:24:37.330 }, 00:24:37.330 { 00:24:37.330 "name": null, 00:24:37.330 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:24:37.330 "is_configured": false, 00:24:37.330 "data_offset": 0, 00:24:37.330 "data_size": 63488 00:24:37.330 }, 00:24:37.330 { 00:24:37.330 "name": "BaseBdev3", 00:24:37.330 "uuid": "4b769226-35c7-515c-9a95-6aebac9a75b8", 00:24:37.330 "is_configured": true, 00:24:37.330 "data_offset": 2048, 00:24:37.330 "data_size": 63488 00:24:37.330 }, 00:24:37.330 { 00:24:37.330 "name": "BaseBdev4", 00:24:37.330 "uuid": "6dccdfaf-1573-5fa7-bc20-40494ffa0542", 00:24:37.330 "is_configured": true, 00:24:37.330 "data_offset": 2048, 00:24:37.330 "data_size": 63488 00:24:37.330 } 00:24:37.330 ] 00:24:37.330 }' 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=380 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:37.330 
23:06:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:37.330 23:06:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:37.646 23:06:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.646 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:37.646 "name": "raid_bdev1", 00:24:37.646 "uuid": "e6c0d1d2-f94a-4e39-863e-34186d856efe", 00:24:37.646 "strip_size_kb": 0, 00:24:37.646 "state": "online", 00:24:37.646 "raid_level": "raid1", 00:24:37.646 "superblock": true, 00:24:37.646 "num_base_bdevs": 4, 00:24:37.646 "num_base_bdevs_discovered": 3, 00:24:37.646 "num_base_bdevs_operational": 3, 00:24:37.646 "process": { 00:24:37.646 "type": "rebuild", 00:24:37.646 "target": "spare", 00:24:37.646 "progress": { 00:24:37.646 "blocks": 26624, 00:24:37.646 "percent": 41 00:24:37.646 } 00:24:37.646 }, 00:24:37.646 "base_bdevs_list": [ 00:24:37.646 { 00:24:37.646 "name": "spare", 00:24:37.646 "uuid": "b6368b21-e12b-5e34-9cf6-b6147f3464f8", 00:24:37.646 "is_configured": true, 00:24:37.646 "data_offset": 2048, 00:24:37.646 "data_size": 63488 00:24:37.646 }, 00:24:37.646 { 00:24:37.646 "name": null, 00:24:37.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:37.646 "is_configured": false, 00:24:37.646 "data_offset": 0, 00:24:37.646 "data_size": 63488 00:24:37.646 }, 00:24:37.646 { 00:24:37.646 "name": "BaseBdev3", 00:24:37.646 "uuid": "4b769226-35c7-515c-9a95-6aebac9a75b8", 00:24:37.646 "is_configured": true, 00:24:37.646 "data_offset": 2048, 00:24:37.646 "data_size": 63488 00:24:37.646 }, 00:24:37.646 { 00:24:37.646 "name": "BaseBdev4", 00:24:37.646 "uuid": "6dccdfaf-1573-5fa7-bc20-40494ffa0542", 00:24:37.646 "is_configured": true, 00:24:37.646 "data_offset": 2048, 00:24:37.646 "data_size": 63488 
00:24:37.646 } 00:24:37.646 ] 00:24:37.646 }' 00:24:37.646 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:37.646 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:37.646 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:37.646 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:37.646 23:06:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:38.577 23:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:38.577 23:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:38.577 23:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:38.577 23:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:38.577 23:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:38.577 23:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:38.577 23:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:38.577 23:06:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.577 23:06:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:38.577 23:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:38.577 23:06:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.577 23:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:38.577 "name": "raid_bdev1", 00:24:38.577 "uuid": 
"e6c0d1d2-f94a-4e39-863e-34186d856efe", 00:24:38.577 "strip_size_kb": 0, 00:24:38.577 "state": "online", 00:24:38.577 "raid_level": "raid1", 00:24:38.577 "superblock": true, 00:24:38.577 "num_base_bdevs": 4, 00:24:38.577 "num_base_bdevs_discovered": 3, 00:24:38.577 "num_base_bdevs_operational": 3, 00:24:38.577 "process": { 00:24:38.577 "type": "rebuild", 00:24:38.577 "target": "spare", 00:24:38.577 "progress": { 00:24:38.578 "blocks": 47104, 00:24:38.578 "percent": 74 00:24:38.578 } 00:24:38.578 }, 00:24:38.578 "base_bdevs_list": [ 00:24:38.578 { 00:24:38.578 "name": "spare", 00:24:38.578 "uuid": "b6368b21-e12b-5e34-9cf6-b6147f3464f8", 00:24:38.578 "is_configured": true, 00:24:38.578 "data_offset": 2048, 00:24:38.578 "data_size": 63488 00:24:38.578 }, 00:24:38.578 { 00:24:38.578 "name": null, 00:24:38.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:38.578 "is_configured": false, 00:24:38.578 "data_offset": 0, 00:24:38.578 "data_size": 63488 00:24:38.578 }, 00:24:38.578 { 00:24:38.578 "name": "BaseBdev3", 00:24:38.578 "uuid": "4b769226-35c7-515c-9a95-6aebac9a75b8", 00:24:38.578 "is_configured": true, 00:24:38.578 "data_offset": 2048, 00:24:38.578 "data_size": 63488 00:24:38.578 }, 00:24:38.578 { 00:24:38.578 "name": "BaseBdev4", 00:24:38.578 "uuid": "6dccdfaf-1573-5fa7-bc20-40494ffa0542", 00:24:38.578 "is_configured": true, 00:24:38.578 "data_offset": 2048, 00:24:38.578 "data_size": 63488 00:24:38.578 } 00:24:38.578 ] 00:24:38.578 }' 00:24:38.578 23:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:38.578 23:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:38.578 23:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:38.578 23:06:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:38.578 23:06:13 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:24:39.142 [2024-12-09 23:06:14.489220] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:39.142 [2024-12-09 23:06:14.489292] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:39.142 [2024-12-09 23:06:14.489407] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:39.706 23:06:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:39.706 23:06:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:39.706 23:06:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:39.706 23:06:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:39.706 23:06:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:39.706 23:06:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:39.706 23:06:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:39.706 23:06:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:39.706 23:06:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.706 23:06:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:39.706 23:06:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.706 23:06:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:39.706 "name": "raid_bdev1", 00:24:39.706 "uuid": "e6c0d1d2-f94a-4e39-863e-34186d856efe", 00:24:39.706 "strip_size_kb": 0, 00:24:39.706 "state": "online", 00:24:39.706 "raid_level": "raid1", 00:24:39.706 "superblock": true, 00:24:39.706 "num_base_bdevs": 
4, 00:24:39.706 "num_base_bdevs_discovered": 3, 00:24:39.706 "num_base_bdevs_operational": 3, 00:24:39.706 "base_bdevs_list": [ 00:24:39.706 { 00:24:39.706 "name": "spare", 00:24:39.706 "uuid": "b6368b21-e12b-5e34-9cf6-b6147f3464f8", 00:24:39.706 "is_configured": true, 00:24:39.706 "data_offset": 2048, 00:24:39.706 "data_size": 63488 00:24:39.706 }, 00:24:39.706 { 00:24:39.706 "name": null, 00:24:39.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:39.706 "is_configured": false, 00:24:39.706 "data_offset": 0, 00:24:39.706 "data_size": 63488 00:24:39.706 }, 00:24:39.706 { 00:24:39.706 "name": "BaseBdev3", 00:24:39.706 "uuid": "4b769226-35c7-515c-9a95-6aebac9a75b8", 00:24:39.706 "is_configured": true, 00:24:39.706 "data_offset": 2048, 00:24:39.706 "data_size": 63488 00:24:39.706 }, 00:24:39.706 { 00:24:39.706 "name": "BaseBdev4", 00:24:39.706 "uuid": "6dccdfaf-1573-5fa7-bc20-40494ffa0542", 00:24:39.706 "is_configured": true, 00:24:39.706 "data_offset": 2048, 00:24:39.706 "data_size": 63488 00:24:39.706 } 00:24:39.706 ] 00:24:39.706 }' 00:24:39.706 23:06:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:39.706 23:06:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:39.706 23:06:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:39.706 23:06:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:24:39.706 23:06:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:24:39.706 23:06:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:39.706 23:06:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:39.706 23:06:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:39.707 23:06:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:39.707 23:06:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:39.707 23:06:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:39.707 23:06:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:39.707 23:06:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.707 23:06:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:39.707 23:06:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.707 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:39.707 "name": "raid_bdev1", 00:24:39.707 "uuid": "e6c0d1d2-f94a-4e39-863e-34186d856efe", 00:24:39.707 "strip_size_kb": 0, 00:24:39.707 "state": "online", 00:24:39.707 "raid_level": "raid1", 00:24:39.707 "superblock": true, 00:24:39.707 "num_base_bdevs": 4, 00:24:39.707 "num_base_bdevs_discovered": 3, 00:24:39.707 "num_base_bdevs_operational": 3, 00:24:39.707 "base_bdevs_list": [ 00:24:39.707 { 00:24:39.707 "name": "spare", 00:24:39.707 "uuid": "b6368b21-e12b-5e34-9cf6-b6147f3464f8", 00:24:39.707 "is_configured": true, 00:24:39.707 "data_offset": 2048, 00:24:39.707 "data_size": 63488 00:24:39.707 }, 00:24:39.707 { 00:24:39.707 "name": null, 00:24:39.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:39.707 "is_configured": false, 00:24:39.707 "data_offset": 0, 00:24:39.707 "data_size": 63488 00:24:39.707 }, 00:24:39.707 { 00:24:39.707 "name": "BaseBdev3", 00:24:39.707 "uuid": "4b769226-35c7-515c-9a95-6aebac9a75b8", 00:24:39.707 "is_configured": true, 00:24:39.707 "data_offset": 2048, 00:24:39.707 "data_size": 63488 00:24:39.707 }, 00:24:39.707 { 00:24:39.707 "name": "BaseBdev4", 00:24:39.707 "uuid": 
"6dccdfaf-1573-5fa7-bc20-40494ffa0542", 00:24:39.707 "is_configured": true, 00:24:39.707 "data_offset": 2048, 00:24:39.707 "data_size": 63488 00:24:39.707 } 00:24:39.707 ] 00:24:39.707 }' 00:24:39.707 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:39.707 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:39.707 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:39.964 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:39.964 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:39.965 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:39.965 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:39.965 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:39.965 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:39.965 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:39.965 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:39.965 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:39.965 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:39.965 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:39.965 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:39.965 23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.965 23:06:15 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:39.965 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:39.965 23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.965 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:39.965 "name": "raid_bdev1", 00:24:39.965 "uuid": "e6c0d1d2-f94a-4e39-863e-34186d856efe", 00:24:39.965 "strip_size_kb": 0, 00:24:39.965 "state": "online", 00:24:39.965 "raid_level": "raid1", 00:24:39.965 "superblock": true, 00:24:39.965 "num_base_bdevs": 4, 00:24:39.965 "num_base_bdevs_discovered": 3, 00:24:39.965 "num_base_bdevs_operational": 3, 00:24:39.965 "base_bdevs_list": [ 00:24:39.965 { 00:24:39.965 "name": "spare", 00:24:39.965 "uuid": "b6368b21-e12b-5e34-9cf6-b6147f3464f8", 00:24:39.965 "is_configured": true, 00:24:39.965 "data_offset": 2048, 00:24:39.965 "data_size": 63488 00:24:39.965 }, 00:24:39.965 { 00:24:39.965 "name": null, 00:24:39.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:39.965 "is_configured": false, 00:24:39.965 "data_offset": 0, 00:24:39.965 "data_size": 63488 00:24:39.965 }, 00:24:39.965 { 00:24:39.965 "name": "BaseBdev3", 00:24:39.965 "uuid": "4b769226-35c7-515c-9a95-6aebac9a75b8", 00:24:39.965 "is_configured": true, 00:24:39.965 "data_offset": 2048, 00:24:39.965 "data_size": 63488 00:24:39.965 }, 00:24:39.965 { 00:24:39.965 "name": "BaseBdev4", 00:24:39.965 "uuid": "6dccdfaf-1573-5fa7-bc20-40494ffa0542", 00:24:39.965 "is_configured": true, 00:24:39.965 "data_offset": 2048, 00:24:39.965 "data_size": 63488 00:24:39.965 } 00:24:39.965 ] 00:24:39.965 }' 00:24:39.965 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:39.965 23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:40.224 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 
-- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:40.224 23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.224 23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:40.224 [2024-12-09 23:06:15.381665] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:40.224 [2024-12-09 23:06:15.381689] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:40.224 [2024-12-09 23:06:15.381755] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:40.224 [2024-12-09 23:06:15.381820] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:40.224 [2024-12-09 23:06:15.381829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:40.224 23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.224 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:40.224 23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.224 23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:40.224 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:24:40.224 23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.224 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:24:40.224 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:40.224 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:24:40.224 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 
00:24:40.224 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:40.224 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:40.224 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:40.224 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:40.224 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:40.224 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:24:40.224 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:40.224 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:40.224 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:40.224 /dev/nbd0 00:24:40.482 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:40.482 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:40.482 23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:40.482 23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:24:40.482 23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:40.482 23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:40.482 23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:40.482 23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:24:40.482 23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:40.482 
23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:40.482 23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:40.482 1+0 records in 00:24:40.482 1+0 records out 00:24:40.482 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264151 s, 15.5 MB/s 00:24:40.482 23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:40.482 23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:24:40.482 23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:40.482 23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:40.482 23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:24:40.482 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:40.482 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:40.482 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:24:40.482 /dev/nbd1 00:24:40.482 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:40.482 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:40.482 23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:24:40.482 23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:24:40.482 23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:40.482 23:06:15 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:40.482 23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:24:40.482 23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:24:40.482 23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:40.482 23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:40.482 23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:40.482 1+0 records in 00:24:40.482 1+0 records out 00:24:40.482 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285618 s, 14.3 MB/s 00:24:40.482 23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:40.740 23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:24:40.740 23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:40.740 23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:40.740 23:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:24:40.740 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:40.740 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:40.740 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:40.740 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:24:40.740 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:40.740 23:06:15 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:40.740 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:40.740 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:24:40.740 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:40.740 23:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:40.998 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:40.998 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:40.998 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:40.998 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:40.998 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:40.998 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:40.998 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:40.998 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:40.998 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:40.998 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:24:40.998 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:40.998 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:40.998 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:40.998 23:06:16 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:40.998 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:40.998 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:40.998 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:40.998 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:40.998 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:24:40.998 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:24:40.998 23:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.998 23:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:40.998 23:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.998 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:40.998 23:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.998 23:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:40.998 [2024-12-09 23:06:16.358453] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:40.998 [2024-12-09 23:06:16.358501] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:40.998 [2024-12-09 23:06:16.358519] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:24:40.998 [2024-12-09 23:06:16.358527] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:41.256 [2024-12-09 23:06:16.360376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:41.256 [2024-12-09 23:06:16.360406] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:41.256 [2024-12-09 23:06:16.360493] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:41.256 [2024-12-09 23:06:16.360530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:41.256 [2024-12-09 23:06:16.360637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:41.256 [2024-12-09 23:06:16.360712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:41.256 spare 00:24:41.256 23:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.256 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:24:41.256 23:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.256 23:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:41.256 [2024-12-09 23:06:16.460788] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:41.256 [2024-12-09 23:06:16.460822] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:41.256 [2024-12-09 23:06:16.461263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:24:41.256 [2024-12-09 23:06:16.461453] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:41.256 [2024-12-09 23:06:16.461481] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:24:41.256 [2024-12-09 23:06:16.461723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:41.256 23:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.256 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 3 00:24:41.256 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:41.256 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:41.256 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:41.256 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:41.256 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:41.256 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:41.256 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:41.256 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:41.256 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:41.256 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:41.256 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:41.256 23:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.256 23:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:41.256 23:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.256 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:41.256 "name": "raid_bdev1", 00:24:41.256 "uuid": "e6c0d1d2-f94a-4e39-863e-34186d856efe", 00:24:41.256 "strip_size_kb": 0, 00:24:41.256 "state": "online", 00:24:41.256 "raid_level": "raid1", 00:24:41.256 "superblock": true, 00:24:41.256 "num_base_bdevs": 4, 00:24:41.256 "num_base_bdevs_discovered": 3, 00:24:41.256 "num_base_bdevs_operational": 
3, 00:24:41.256 "base_bdevs_list": [ 00:24:41.256 { 00:24:41.256 "name": "spare", 00:24:41.256 "uuid": "b6368b21-e12b-5e34-9cf6-b6147f3464f8", 00:24:41.256 "is_configured": true, 00:24:41.256 "data_offset": 2048, 00:24:41.256 "data_size": 63488 00:24:41.256 }, 00:24:41.256 { 00:24:41.256 "name": null, 00:24:41.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:41.256 "is_configured": false, 00:24:41.256 "data_offset": 2048, 00:24:41.256 "data_size": 63488 00:24:41.256 }, 00:24:41.256 { 00:24:41.256 "name": "BaseBdev3", 00:24:41.256 "uuid": "4b769226-35c7-515c-9a95-6aebac9a75b8", 00:24:41.256 "is_configured": true, 00:24:41.256 "data_offset": 2048, 00:24:41.256 "data_size": 63488 00:24:41.256 }, 00:24:41.256 { 00:24:41.256 "name": "BaseBdev4", 00:24:41.256 "uuid": "6dccdfaf-1573-5fa7-bc20-40494ffa0542", 00:24:41.256 "is_configured": true, 00:24:41.256 "data_offset": 2048, 00:24:41.256 "data_size": 63488 00:24:41.256 } 00:24:41.256 ] 00:24:41.256 }' 00:24:41.256 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:41.256 23:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:41.513 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:41.513 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:41.513 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:41.513 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:41.513 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:41.513 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:41.513 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:41.513 23:06:16 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.513 23:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:41.513 23:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.513 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:41.513 "name": "raid_bdev1", 00:24:41.513 "uuid": "e6c0d1d2-f94a-4e39-863e-34186d856efe", 00:24:41.513 "strip_size_kb": 0, 00:24:41.513 "state": "online", 00:24:41.513 "raid_level": "raid1", 00:24:41.513 "superblock": true, 00:24:41.513 "num_base_bdevs": 4, 00:24:41.513 "num_base_bdevs_discovered": 3, 00:24:41.513 "num_base_bdevs_operational": 3, 00:24:41.513 "base_bdevs_list": [ 00:24:41.513 { 00:24:41.513 "name": "spare", 00:24:41.513 "uuid": "b6368b21-e12b-5e34-9cf6-b6147f3464f8", 00:24:41.513 "is_configured": true, 00:24:41.513 "data_offset": 2048, 00:24:41.513 "data_size": 63488 00:24:41.513 }, 00:24:41.513 { 00:24:41.513 "name": null, 00:24:41.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:41.513 "is_configured": false, 00:24:41.513 "data_offset": 2048, 00:24:41.513 "data_size": 63488 00:24:41.513 }, 00:24:41.513 { 00:24:41.513 "name": "BaseBdev3", 00:24:41.513 "uuid": "4b769226-35c7-515c-9a95-6aebac9a75b8", 00:24:41.513 "is_configured": true, 00:24:41.513 "data_offset": 2048, 00:24:41.513 "data_size": 63488 00:24:41.513 }, 00:24:41.513 { 00:24:41.513 "name": "BaseBdev4", 00:24:41.513 "uuid": "6dccdfaf-1573-5fa7-bc20-40494ffa0542", 00:24:41.513 "is_configured": true, 00:24:41.513 "data_offset": 2048, 00:24:41.513 "data_size": 63488 00:24:41.513 } 00:24:41.513 ] 00:24:41.513 }' 00:24:41.513 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:41.513 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:41.513 23:06:16 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:41.770 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:41.771 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:41.771 23:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.771 23:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:41.771 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:41.771 23:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.771 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:24:41.771 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:41.771 23:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.771 23:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:41.771 [2024-12-09 23:06:16.926621] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:41.771 23:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.771 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:41.771 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:41.771 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:41.771 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:41.771 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:41.771 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:24:41.771 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:41.771 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:41.771 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:41.771 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:41.771 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:41.771 23:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.771 23:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:41.771 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:41.771 23:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.771 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:41.771 "name": "raid_bdev1", 00:24:41.771 "uuid": "e6c0d1d2-f94a-4e39-863e-34186d856efe", 00:24:41.771 "strip_size_kb": 0, 00:24:41.771 "state": "online", 00:24:41.771 "raid_level": "raid1", 00:24:41.771 "superblock": true, 00:24:41.771 "num_base_bdevs": 4, 00:24:41.771 "num_base_bdevs_discovered": 2, 00:24:41.771 "num_base_bdevs_operational": 2, 00:24:41.771 "base_bdevs_list": [ 00:24:41.771 { 00:24:41.771 "name": null, 00:24:41.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:41.771 "is_configured": false, 00:24:41.771 "data_offset": 0, 00:24:41.771 "data_size": 63488 00:24:41.771 }, 00:24:41.771 { 00:24:41.771 "name": null, 00:24:41.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:41.771 "is_configured": false, 00:24:41.771 "data_offset": 2048, 00:24:41.771 "data_size": 63488 00:24:41.771 }, 00:24:41.771 { 00:24:41.771 "name": "BaseBdev3", 00:24:41.771 
"uuid": "4b769226-35c7-515c-9a95-6aebac9a75b8", 00:24:41.771 "is_configured": true, 00:24:41.771 "data_offset": 2048, 00:24:41.771 "data_size": 63488 00:24:41.771 }, 00:24:41.771 { 00:24:41.771 "name": "BaseBdev4", 00:24:41.771 "uuid": "6dccdfaf-1573-5fa7-bc20-40494ffa0542", 00:24:41.771 "is_configured": true, 00:24:41.771 "data_offset": 2048, 00:24:41.771 "data_size": 63488 00:24:41.771 } 00:24:41.771 ] 00:24:41.771 }' 00:24:41.771 23:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:41.771 23:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:42.028 23:06:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:42.028 23:06:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.028 23:06:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:42.028 [2024-12-09 23:06:17.266688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:42.028 [2024-12-09 23:06:17.266837] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:24:42.028 [2024-12-09 23:06:17.266851] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:24:42.028 [2024-12-09 23:06:17.266884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:42.028 [2024-12-09 23:06:17.274694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:24:42.028 23:06:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.028 23:06:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:24:42.028 [2024-12-09 23:06:17.276398] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:42.961 23:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:42.961 23:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:42.961 23:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:42.961 23:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:42.961 23:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:42.961 23:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:42.961 23:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:42.961 23:06:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.961 23:06:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:42.961 23:06:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.961 23:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:42.961 "name": "raid_bdev1", 00:24:42.961 "uuid": "e6c0d1d2-f94a-4e39-863e-34186d856efe", 00:24:42.961 "strip_size_kb": 0, 00:24:42.961 "state": "online", 00:24:42.961 "raid_level": "raid1", 
00:24:42.961 "superblock": true, 00:24:42.961 "num_base_bdevs": 4, 00:24:42.961 "num_base_bdevs_discovered": 3, 00:24:42.961 "num_base_bdevs_operational": 3, 00:24:42.961 "process": { 00:24:42.961 "type": "rebuild", 00:24:42.961 "target": "spare", 00:24:42.961 "progress": { 00:24:42.961 "blocks": 20480, 00:24:42.961 "percent": 32 00:24:42.961 } 00:24:42.961 }, 00:24:42.961 "base_bdevs_list": [ 00:24:42.961 { 00:24:42.961 "name": "spare", 00:24:42.961 "uuid": "b6368b21-e12b-5e34-9cf6-b6147f3464f8", 00:24:42.961 "is_configured": true, 00:24:42.961 "data_offset": 2048, 00:24:42.961 "data_size": 63488 00:24:42.961 }, 00:24:42.961 { 00:24:42.961 "name": null, 00:24:42.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:42.961 "is_configured": false, 00:24:42.961 "data_offset": 2048, 00:24:42.961 "data_size": 63488 00:24:42.961 }, 00:24:42.961 { 00:24:42.961 "name": "BaseBdev3", 00:24:42.961 "uuid": "4b769226-35c7-515c-9a95-6aebac9a75b8", 00:24:42.961 "is_configured": true, 00:24:42.961 "data_offset": 2048, 00:24:42.961 "data_size": 63488 00:24:42.961 }, 00:24:42.961 { 00:24:42.961 "name": "BaseBdev4", 00:24:42.961 "uuid": "6dccdfaf-1573-5fa7-bc20-40494ffa0542", 00:24:42.961 "is_configured": true, 00:24:42.961 "data_offset": 2048, 00:24:42.961 "data_size": 63488 00:24:42.961 } 00:24:42.961 ] 00:24:42.961 }' 00:24:42.961 23:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:43.218 23:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:43.218 23:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:43.218 23:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:43.218 23:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:24:43.218 23:06:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:43.218 23:06:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:43.218 [2024-12-09 23:06:18.398355] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:43.218 [2024-12-09 23:06:18.481982] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:43.218 [2024-12-09 23:06:18.482220] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:43.218 [2024-12-09 23:06:18.482241] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:43.218 [2024-12-09 23:06:18.482248] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:43.218 23:06:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.218 23:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:43.218 23:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:43.218 23:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:43.218 23:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:43.218 23:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:43.218 23:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:43.219 23:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:43.219 23:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:43.219 23:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:43.219 23:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:43.219 23:06:18 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:43.219 23:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:43.219 23:06:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.219 23:06:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:43.219 23:06:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.219 23:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:43.219 "name": "raid_bdev1", 00:24:43.219 "uuid": "e6c0d1d2-f94a-4e39-863e-34186d856efe", 00:24:43.219 "strip_size_kb": 0, 00:24:43.219 "state": "online", 00:24:43.219 "raid_level": "raid1", 00:24:43.219 "superblock": true, 00:24:43.219 "num_base_bdevs": 4, 00:24:43.219 "num_base_bdevs_discovered": 2, 00:24:43.219 "num_base_bdevs_operational": 2, 00:24:43.219 "base_bdevs_list": [ 00:24:43.219 { 00:24:43.219 "name": null, 00:24:43.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:43.219 "is_configured": false, 00:24:43.219 "data_offset": 0, 00:24:43.219 "data_size": 63488 00:24:43.219 }, 00:24:43.219 { 00:24:43.219 "name": null, 00:24:43.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:43.219 "is_configured": false, 00:24:43.219 "data_offset": 2048, 00:24:43.219 "data_size": 63488 00:24:43.219 }, 00:24:43.219 { 00:24:43.219 "name": "BaseBdev3", 00:24:43.219 "uuid": "4b769226-35c7-515c-9a95-6aebac9a75b8", 00:24:43.219 "is_configured": true, 00:24:43.219 "data_offset": 2048, 00:24:43.219 "data_size": 63488 00:24:43.219 }, 00:24:43.219 { 00:24:43.219 "name": "BaseBdev4", 00:24:43.219 "uuid": "6dccdfaf-1573-5fa7-bc20-40494ffa0542", 00:24:43.219 "is_configured": true, 00:24:43.219 "data_offset": 2048, 00:24:43.219 "data_size": 63488 00:24:43.219 } 00:24:43.219 ] 00:24:43.219 }' 00:24:43.219 23:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:24:43.219 23:06:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:43.475 23:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:43.475 23:06:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.475 23:06:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:43.475 [2024-12-09 23:06:18.790557] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:43.475 [2024-12-09 23:06:18.790713] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:43.475 [2024-12-09 23:06:18.790743] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:24:43.475 [2024-12-09 23:06:18.790751] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:43.475 [2024-12-09 23:06:18.791146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:43.475 [2024-12-09 23:06:18.791160] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:43.475 [2024-12-09 23:06:18.791237] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:43.475 [2024-12-09 23:06:18.791247] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:24:43.475 [2024-12-09 23:06:18.791259] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:24:43.475 [2024-12-09 23:06:18.791277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:43.475 [2024-12-09 23:06:18.798984] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:24:43.475 spare 00:24:43.475 23:06:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.475 23:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:24:43.475 [2024-12-09 23:06:18.800591] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:44.847 23:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:44.847 23:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:44.847 23:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:44.847 23:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:44.847 23:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:44.847 23:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:44.847 23:06:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.847 23:06:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:44.847 23:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:44.847 23:06:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.847 23:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:44.847 "name": "raid_bdev1", 00:24:44.847 "uuid": "e6c0d1d2-f94a-4e39-863e-34186d856efe", 00:24:44.847 "strip_size_kb": 0, 00:24:44.847 "state": "online", 00:24:44.847 
"raid_level": "raid1", 00:24:44.847 "superblock": true, 00:24:44.847 "num_base_bdevs": 4, 00:24:44.847 "num_base_bdevs_discovered": 3, 00:24:44.847 "num_base_bdevs_operational": 3, 00:24:44.847 "process": { 00:24:44.847 "type": "rebuild", 00:24:44.847 "target": "spare", 00:24:44.847 "progress": { 00:24:44.847 "blocks": 20480, 00:24:44.847 "percent": 32 00:24:44.847 } 00:24:44.847 }, 00:24:44.847 "base_bdevs_list": [ 00:24:44.847 { 00:24:44.847 "name": "spare", 00:24:44.847 "uuid": "b6368b21-e12b-5e34-9cf6-b6147f3464f8", 00:24:44.847 "is_configured": true, 00:24:44.847 "data_offset": 2048, 00:24:44.847 "data_size": 63488 00:24:44.847 }, 00:24:44.847 { 00:24:44.847 "name": null, 00:24:44.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:44.847 "is_configured": false, 00:24:44.847 "data_offset": 2048, 00:24:44.847 "data_size": 63488 00:24:44.847 }, 00:24:44.847 { 00:24:44.847 "name": "BaseBdev3", 00:24:44.847 "uuid": "4b769226-35c7-515c-9a95-6aebac9a75b8", 00:24:44.847 "is_configured": true, 00:24:44.847 "data_offset": 2048, 00:24:44.847 "data_size": 63488 00:24:44.847 }, 00:24:44.847 { 00:24:44.847 "name": "BaseBdev4", 00:24:44.847 "uuid": "6dccdfaf-1573-5fa7-bc20-40494ffa0542", 00:24:44.847 "is_configured": true, 00:24:44.847 "data_offset": 2048, 00:24:44.847 "data_size": 63488 00:24:44.847 } 00:24:44.847 ] 00:24:44.847 }' 00:24:44.847 23:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:44.847 23:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:44.847 23:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:44.847 23:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:44.847 23:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:24:44.847 23:06:19 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.847 23:06:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:44.847 [2024-12-09 23:06:19.902971] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:44.848 [2024-12-09 23:06:19.905706] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:44.848 [2024-12-09 23:06:19.905866] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:44.848 [2024-12-09 23:06:19.905882] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:44.848 [2024-12-09 23:06:19.905890] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:44.848 23:06:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.848 23:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:44.848 23:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:44.848 23:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:44.848 23:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:44.848 23:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:44.848 23:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:44.848 23:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:44.848 23:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:44.848 23:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:44.848 23:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:44.848 
23:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:44.848 23:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:44.848 23:06:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.848 23:06:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:44.848 23:06:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.848 23:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:44.848 "name": "raid_bdev1", 00:24:44.848 "uuid": "e6c0d1d2-f94a-4e39-863e-34186d856efe", 00:24:44.848 "strip_size_kb": 0, 00:24:44.848 "state": "online", 00:24:44.848 "raid_level": "raid1", 00:24:44.848 "superblock": true, 00:24:44.848 "num_base_bdevs": 4, 00:24:44.848 "num_base_bdevs_discovered": 2, 00:24:44.848 "num_base_bdevs_operational": 2, 00:24:44.848 "base_bdevs_list": [ 00:24:44.848 { 00:24:44.848 "name": null, 00:24:44.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:44.848 "is_configured": false, 00:24:44.848 "data_offset": 0, 00:24:44.848 "data_size": 63488 00:24:44.848 }, 00:24:44.848 { 00:24:44.848 "name": null, 00:24:44.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:44.848 "is_configured": false, 00:24:44.848 "data_offset": 2048, 00:24:44.848 "data_size": 63488 00:24:44.848 }, 00:24:44.848 { 00:24:44.848 "name": "BaseBdev3", 00:24:44.848 "uuid": "4b769226-35c7-515c-9a95-6aebac9a75b8", 00:24:44.848 "is_configured": true, 00:24:44.848 "data_offset": 2048, 00:24:44.848 "data_size": 63488 00:24:44.848 }, 00:24:44.848 { 00:24:44.848 "name": "BaseBdev4", 00:24:44.848 "uuid": "6dccdfaf-1573-5fa7-bc20-40494ffa0542", 00:24:44.848 "is_configured": true, 00:24:44.848 "data_offset": 2048, 00:24:44.848 "data_size": 63488 00:24:44.848 } 00:24:44.848 ] 00:24:44.848 }' 00:24:44.848 23:06:19 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:44.848 23:06:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:45.104 23:06:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:45.104 23:06:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:45.104 23:06:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:45.104 23:06:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:45.104 23:06:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:45.104 23:06:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:45.104 23:06:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.104 23:06:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:45.104 23:06:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.104 23:06:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.104 23:06:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:45.104 "name": "raid_bdev1", 00:24:45.105 "uuid": "e6c0d1d2-f94a-4e39-863e-34186d856efe", 00:24:45.105 "strip_size_kb": 0, 00:24:45.105 "state": "online", 00:24:45.105 "raid_level": "raid1", 00:24:45.105 "superblock": true, 00:24:45.105 "num_base_bdevs": 4, 00:24:45.105 "num_base_bdevs_discovered": 2, 00:24:45.105 "num_base_bdevs_operational": 2, 00:24:45.105 "base_bdevs_list": [ 00:24:45.105 { 00:24:45.105 "name": null, 00:24:45.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:45.105 "is_configured": false, 00:24:45.105 "data_offset": 0, 00:24:45.105 "data_size": 63488 00:24:45.105 }, 00:24:45.105 
{ 00:24:45.105 "name": null, 00:24:45.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:45.105 "is_configured": false, 00:24:45.105 "data_offset": 2048, 00:24:45.105 "data_size": 63488 00:24:45.105 }, 00:24:45.105 { 00:24:45.105 "name": "BaseBdev3", 00:24:45.105 "uuid": "4b769226-35c7-515c-9a95-6aebac9a75b8", 00:24:45.105 "is_configured": true, 00:24:45.105 "data_offset": 2048, 00:24:45.105 "data_size": 63488 00:24:45.105 }, 00:24:45.105 { 00:24:45.105 "name": "BaseBdev4", 00:24:45.105 "uuid": "6dccdfaf-1573-5fa7-bc20-40494ffa0542", 00:24:45.105 "is_configured": true, 00:24:45.105 "data_offset": 2048, 00:24:45.105 "data_size": 63488 00:24:45.105 } 00:24:45.105 ] 00:24:45.105 }' 00:24:45.105 23:06:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:45.105 23:06:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:45.105 23:06:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:45.105 23:06:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:45.105 23:06:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:24:45.105 23:06:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.105 23:06:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:45.105 23:06:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.105 23:06:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:45.105 23:06:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.105 23:06:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:45.105 [2024-12-09 23:06:20.334050] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:45.105 [2024-12-09 23:06:20.334111] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:45.105 [2024-12-09 23:06:20.334128] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:24:45.105 [2024-12-09 23:06:20.334137] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:45.105 [2024-12-09 23:06:20.334497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:45.105 [2024-12-09 23:06:20.334511] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:45.105 [2024-12-09 23:06:20.334575] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:45.105 [2024-12-09 23:06:20.334588] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:24:45.105 [2024-12-09 23:06:20.334595] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:45.105 [2024-12-09 23:06:20.334606] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:24:45.105 BaseBdev1 00:24:45.105 23:06:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.105 23:06:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:24:46.038 23:06:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:46.038 23:06:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:46.038 23:06:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:46.038 23:06:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:46.038 23:06:21 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:46.038 23:06:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:46.038 23:06:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:46.038 23:06:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:46.038 23:06:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:46.038 23:06:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:46.038 23:06:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:46.038 23:06:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:46.038 23:06:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.038 23:06:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:46.038 23:06:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.038 23:06:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:46.038 "name": "raid_bdev1", 00:24:46.038 "uuid": "e6c0d1d2-f94a-4e39-863e-34186d856efe", 00:24:46.038 "strip_size_kb": 0, 00:24:46.038 "state": "online", 00:24:46.038 "raid_level": "raid1", 00:24:46.038 "superblock": true, 00:24:46.038 "num_base_bdevs": 4, 00:24:46.038 "num_base_bdevs_discovered": 2, 00:24:46.038 "num_base_bdevs_operational": 2, 00:24:46.038 "base_bdevs_list": [ 00:24:46.038 { 00:24:46.038 "name": null, 00:24:46.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:46.038 "is_configured": false, 00:24:46.038 "data_offset": 0, 00:24:46.038 "data_size": 63488 00:24:46.038 }, 00:24:46.038 { 00:24:46.038 "name": null, 00:24:46.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:46.038 
"is_configured": false, 00:24:46.038 "data_offset": 2048, 00:24:46.038 "data_size": 63488 00:24:46.038 }, 00:24:46.038 { 00:24:46.038 "name": "BaseBdev3", 00:24:46.038 "uuid": "4b769226-35c7-515c-9a95-6aebac9a75b8", 00:24:46.038 "is_configured": true, 00:24:46.038 "data_offset": 2048, 00:24:46.038 "data_size": 63488 00:24:46.038 }, 00:24:46.038 { 00:24:46.038 "name": "BaseBdev4", 00:24:46.038 "uuid": "6dccdfaf-1573-5fa7-bc20-40494ffa0542", 00:24:46.038 "is_configured": true, 00:24:46.038 "data_offset": 2048, 00:24:46.038 "data_size": 63488 00:24:46.038 } 00:24:46.038 ] 00:24:46.038 }' 00:24:46.038 23:06:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:46.038 23:06:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:46.295 23:06:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:46.295 23:06:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:46.295 23:06:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:46.295 23:06:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:46.296 23:06:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:46.296 23:06:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:46.296 23:06:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.296 23:06:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:46.296 23:06:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:46.553 23:06:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.553 23:06:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:24:46.553 "name": "raid_bdev1", 00:24:46.553 "uuid": "e6c0d1d2-f94a-4e39-863e-34186d856efe", 00:24:46.553 "strip_size_kb": 0, 00:24:46.553 "state": "online", 00:24:46.553 "raid_level": "raid1", 00:24:46.553 "superblock": true, 00:24:46.553 "num_base_bdevs": 4, 00:24:46.553 "num_base_bdevs_discovered": 2, 00:24:46.553 "num_base_bdevs_operational": 2, 00:24:46.553 "base_bdevs_list": [ 00:24:46.553 { 00:24:46.553 "name": null, 00:24:46.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:46.553 "is_configured": false, 00:24:46.553 "data_offset": 0, 00:24:46.553 "data_size": 63488 00:24:46.553 }, 00:24:46.553 { 00:24:46.553 "name": null, 00:24:46.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:46.553 "is_configured": false, 00:24:46.553 "data_offset": 2048, 00:24:46.553 "data_size": 63488 00:24:46.553 }, 00:24:46.553 { 00:24:46.553 "name": "BaseBdev3", 00:24:46.553 "uuid": "4b769226-35c7-515c-9a95-6aebac9a75b8", 00:24:46.553 "is_configured": true, 00:24:46.553 "data_offset": 2048, 00:24:46.553 "data_size": 63488 00:24:46.553 }, 00:24:46.553 { 00:24:46.553 "name": "BaseBdev4", 00:24:46.553 "uuid": "6dccdfaf-1573-5fa7-bc20-40494ffa0542", 00:24:46.553 "is_configured": true, 00:24:46.553 "data_offset": 2048, 00:24:46.553 "data_size": 63488 00:24:46.553 } 00:24:46.553 ] 00:24:46.553 }' 00:24:46.553 23:06:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:46.553 23:06:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:46.553 23:06:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:46.553 23:06:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:46.553 23:06:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:46.553 23:06:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:24:46.553 23:06:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:46.553 23:06:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:46.553 23:06:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:46.553 23:06:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:46.553 23:06:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:46.553 23:06:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:46.553 23:06:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.553 23:06:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:46.553 [2024-12-09 23:06:21.770340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:46.553 [2024-12-09 23:06:21.770487] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:24:46.553 [2024-12-09 23:06:21.770498] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:46.553 request: 00:24:46.553 { 00:24:46.553 "base_bdev": "BaseBdev1", 00:24:46.553 "raid_bdev": "raid_bdev1", 00:24:46.553 "method": "bdev_raid_add_base_bdev", 00:24:46.553 "req_id": 1 00:24:46.553 } 00:24:46.553 Got JSON-RPC error response 00:24:46.553 response: 00:24:46.553 { 00:24:46.553 "code": -22, 00:24:46.553 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:24:46.553 } 00:24:46.553 23:06:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:46.553 23:06:21 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:24:46.553 23:06:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:46.553 23:06:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:46.553 23:06:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:46.553 23:06:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:24:47.486 23:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:47.486 23:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:47.486 23:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:47.486 23:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:47.486 23:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:47.486 23:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:47.486 23:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:47.486 23:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:47.486 23:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:47.486 23:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:47.486 23:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:47.486 23:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:47.486 23:06:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.486 23:06:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:24:47.486 23:06:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.486 23:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:47.486 "name": "raid_bdev1", 00:24:47.486 "uuid": "e6c0d1d2-f94a-4e39-863e-34186d856efe", 00:24:47.486 "strip_size_kb": 0, 00:24:47.486 "state": "online", 00:24:47.486 "raid_level": "raid1", 00:24:47.486 "superblock": true, 00:24:47.486 "num_base_bdevs": 4, 00:24:47.486 "num_base_bdevs_discovered": 2, 00:24:47.486 "num_base_bdevs_operational": 2, 00:24:47.486 "base_bdevs_list": [ 00:24:47.486 { 00:24:47.486 "name": null, 00:24:47.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:47.486 "is_configured": false, 00:24:47.486 "data_offset": 0, 00:24:47.486 "data_size": 63488 00:24:47.486 }, 00:24:47.486 { 00:24:47.486 "name": null, 00:24:47.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:47.486 "is_configured": false, 00:24:47.486 "data_offset": 2048, 00:24:47.486 "data_size": 63488 00:24:47.486 }, 00:24:47.486 { 00:24:47.486 "name": "BaseBdev3", 00:24:47.486 "uuid": "4b769226-35c7-515c-9a95-6aebac9a75b8", 00:24:47.486 "is_configured": true, 00:24:47.486 "data_offset": 2048, 00:24:47.486 "data_size": 63488 00:24:47.486 }, 00:24:47.486 { 00:24:47.486 "name": "BaseBdev4", 00:24:47.486 "uuid": "6dccdfaf-1573-5fa7-bc20-40494ffa0542", 00:24:47.486 "is_configured": true, 00:24:47.486 "data_offset": 2048, 00:24:47.486 "data_size": 63488 00:24:47.486 } 00:24:47.486 ] 00:24:47.486 }' 00:24:47.486 23:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:47.486 23:06:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:48.062 23:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:48.062 23:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:48.062 23:06:23 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:48.062 23:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:48.062 23:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:48.062 23:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:48.062 23:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:48.062 23:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.062 23:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:48.062 23:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.062 23:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:48.062 "name": "raid_bdev1", 00:24:48.062 "uuid": "e6c0d1d2-f94a-4e39-863e-34186d856efe", 00:24:48.062 "strip_size_kb": 0, 00:24:48.062 "state": "online", 00:24:48.062 "raid_level": "raid1", 00:24:48.062 "superblock": true, 00:24:48.062 "num_base_bdevs": 4, 00:24:48.062 "num_base_bdevs_discovered": 2, 00:24:48.062 "num_base_bdevs_operational": 2, 00:24:48.062 "base_bdevs_list": [ 00:24:48.062 { 00:24:48.062 "name": null, 00:24:48.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:48.062 "is_configured": false, 00:24:48.062 "data_offset": 0, 00:24:48.062 "data_size": 63488 00:24:48.062 }, 00:24:48.062 { 00:24:48.062 "name": null, 00:24:48.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:48.062 "is_configured": false, 00:24:48.062 "data_offset": 2048, 00:24:48.062 "data_size": 63488 00:24:48.062 }, 00:24:48.062 { 00:24:48.062 "name": "BaseBdev3", 00:24:48.062 "uuid": "4b769226-35c7-515c-9a95-6aebac9a75b8", 00:24:48.062 "is_configured": true, 00:24:48.062 "data_offset": 2048, 00:24:48.062 "data_size": 63488 00:24:48.062 }, 
00:24:48.062 { 00:24:48.062 "name": "BaseBdev4", 00:24:48.062 "uuid": "6dccdfaf-1573-5fa7-bc20-40494ffa0542", 00:24:48.062 "is_configured": true, 00:24:48.062 "data_offset": 2048, 00:24:48.062 "data_size": 63488 00:24:48.062 } 00:24:48.062 ] 00:24:48.062 }' 00:24:48.062 23:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:48.062 23:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:48.062 23:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:48.062 23:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:48.062 23:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75883 00:24:48.062 23:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75883 ']' 00:24:48.062 23:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75883 00:24:48.062 23:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:24:48.062 23:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:48.062 23:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75883 00:24:48.062 killing process with pid 75883 00:24:48.062 Received shutdown signal, test time was about 60.000000 seconds 00:24:48.062 00:24:48.062 Latency(us) 00:24:48.062 [2024-12-09T23:06:23.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:48.062 [2024-12-09T23:06:23.425Z] =================================================================================================================== 00:24:48.062 [2024-12-09T23:06:23.425Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:48.062 23:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:24:48.062 23:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:48.062 23:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75883' 00:24:48.062 23:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75883 00:24:48.063 [2024-12-09 23:06:23.243214] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:48.063 [2024-12-09 23:06:23.243304] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:48.063 23:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75883 00:24:48.063 [2024-12-09 23:06:23.243361] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:48.063 [2024-12-09 23:06:23.243369] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:24:48.320 [2024-12-09 23:06:23.484129] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:24:48.886 00:24:48.886 real 0m22.057s 00:24:48.886 user 0m25.657s 00:24:48.886 sys 0m2.822s 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:48.886 ************************************ 00:24:48.886 END TEST raid_rebuild_test_sb 00:24:48.886 ************************************ 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:48.886 23:06:24 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:24:48.886 23:06:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:24:48.886 23:06:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:48.886 23:06:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:24:48.886 ************************************ 00:24:48.886 START TEST raid_rebuild_test_io 00:24:48.886 ************************************ 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:48.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76618 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76618 00:24:48.886 23:06:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76618 ']' 00:24:48.887 23:06:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:48.887 23:06:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:48.887 23:06:24 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:48.887 23:06:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:48.887 23:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:48.887 23:06:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:48.887 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:48.887 Zero copy mechanism will not be used. 00:24:48.887 [2024-12-09 23:06:24.171764] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:24:48.887 [2024-12-09 23:06:24.171890] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76618 ] 00:24:49.144 [2024-12-09 23:06:24.318476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.144 [2024-12-09 23:06:24.404137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.402 [2024-12-09 23:06:24.516029] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:49.402 [2024-12-09 23:06:24.516062] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:49.660 23:06:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:49.660 23:06:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:24:49.660 23:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:49.660 23:06:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:24:49.660 23:06:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.660 23:06:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:49.660 BaseBdev1_malloc 00:24:49.660 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.660 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:49.660 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.660 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:49.660 [2024-12-09 23:06:25.004484] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:49.660 [2024-12-09 23:06:25.004538] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:49.660 [2024-12-09 23:06:25.004557] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:49.660 [2024-12-09 23:06:25.004567] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:49.660 [2024-12-09 23:06:25.006401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:49.660 [2024-12-09 23:06:25.006436] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:49.660 BaseBdev1 00:24:49.660 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.660 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:49.660 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:49.660 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.660 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:24:49.917 BaseBdev2_malloc 00:24:49.917 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.917 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:49.917 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.917 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:49.917 [2024-12-09 23:06:25.040435] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:49.917 [2024-12-09 23:06:25.040490] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:49.917 [2024-12-09 23:06:25.040509] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:49.917 [2024-12-09 23:06:25.040518] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:49.917 [2024-12-09 23:06:25.042304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:49.917 [2024-12-09 23:06:25.042331] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:49.917 BaseBdev2 00:24:49.917 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.917 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:49.917 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:49.917 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.917 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:49.917 BaseBdev3_malloc 00:24:49.917 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.917 23:06:25 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:49.917 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.917 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:49.917 [2024-12-09 23:06:25.107924] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:49.917 [2024-12-09 23:06:25.107974] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:49.917 [2024-12-09 23:06:25.107992] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:49.917 [2024-12-09 23:06:25.108002] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:49.917 [2024-12-09 23:06:25.109775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:49.917 [2024-12-09 23:06:25.109810] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:49.917 BaseBdev3 00:24:49.917 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.917 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:49.917 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:49.917 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.917 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:49.917 BaseBdev4_malloc 00:24:49.917 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.917 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:49.917 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:24:49.917 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:49.917 [2024-12-09 23:06:25.140016] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:49.917 [2024-12-09 23:06:25.140067] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:49.917 [2024-12-09 23:06:25.140081] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:24:49.918 [2024-12-09 23:06:25.140090] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:49.918 [2024-12-09 23:06:25.141859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:49.918 [2024-12-09 23:06:25.141893] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:49.918 BaseBdev4 00:24:49.918 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.918 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:24:49.918 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.918 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:49.918 spare_malloc 00:24:49.918 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.918 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:49.918 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.918 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:49.918 spare_delay 00:24:49.918 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.918 23:06:25 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:49.918 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.918 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:49.918 [2024-12-09 23:06:25.179874] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:49.918 [2024-12-09 23:06:25.179917] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:49.918 [2024-12-09 23:06:25.179930] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:24:49.918 [2024-12-09 23:06:25.179939] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:49.918 [2024-12-09 23:06:25.181697] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:49.918 [2024-12-09 23:06:25.181818] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:49.918 spare 00:24:49.918 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.918 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:24:49.918 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.918 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:49.918 [2024-12-09 23:06:25.187914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:49.918 [2024-12-09 23:06:25.189548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:49.918 [2024-12-09 23:06:25.189659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:49.918 [2024-12-09 23:06:25.189722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:24:49.918 [2024-12-09 23:06:25.189834] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:49.918 [2024-12-09 23:06:25.189891] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:24:49.918 [2024-12-09 23:06:25.190142] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:49.918 [2024-12-09 23:06:25.190330] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:49.918 [2024-12-09 23:06:25.190390] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:49.918 [2024-12-09 23:06:25.190557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:49.918 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.918 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:24:49.918 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:49.918 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:49.918 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:49.918 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:49.918 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:49.918 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:49.918 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:49.918 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:49.918 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:24:49.918 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:49.918 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:49.918 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.918 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:49.918 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.918 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:49.918 "name": "raid_bdev1", 00:24:49.918 "uuid": "c4ca3575-c3b9-4dc9-87c7-9dcf44b82304", 00:24:49.918 "strip_size_kb": 0, 00:24:49.918 "state": "online", 00:24:49.918 "raid_level": "raid1", 00:24:49.918 "superblock": false, 00:24:49.918 "num_base_bdevs": 4, 00:24:49.918 "num_base_bdevs_discovered": 4, 00:24:49.918 "num_base_bdevs_operational": 4, 00:24:49.918 "base_bdevs_list": [ 00:24:49.918 { 00:24:49.918 "name": "BaseBdev1", 00:24:49.918 "uuid": "90cad2f7-f6e4-5f83-acd4-8017853109fa", 00:24:49.918 "is_configured": true, 00:24:49.918 "data_offset": 0, 00:24:49.918 "data_size": 65536 00:24:49.918 }, 00:24:49.918 { 00:24:49.918 "name": "BaseBdev2", 00:24:49.918 "uuid": "8510c6ab-1027-53a4-93ef-f0802da1289e", 00:24:49.918 "is_configured": true, 00:24:49.918 "data_offset": 0, 00:24:49.918 "data_size": 65536 00:24:49.918 }, 00:24:49.918 { 00:24:49.918 "name": "BaseBdev3", 00:24:49.918 "uuid": "0ee51259-a2a5-5862-bf06-6def624305fe", 00:24:49.918 "is_configured": true, 00:24:49.918 "data_offset": 0, 00:24:49.918 "data_size": 65536 00:24:49.918 }, 00:24:49.918 { 00:24:49.918 "name": "BaseBdev4", 00:24:49.918 "uuid": "7dfeb589-0d77-5369-96d8-e58b63393dcf", 00:24:49.918 "is_configured": true, 00:24:49.918 "data_offset": 0, 00:24:49.918 "data_size": 65536 00:24:49.918 } 00:24:49.918 ] 00:24:49.918 }' 00:24:49.918 
23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:49.918 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:50.175 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:24:50.175 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:50.175 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.175 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:50.175 [2024-12-09 23:06:25.500296] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:50.175 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.175 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:24:50.433 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:50.433 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:50.433 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.433 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:50.433 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.433 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:24:50.433 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:24:50.433 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:50.433 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:24:50.433 23:06:25 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.433 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:50.433 [2024-12-09 23:06:25.575975] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:50.433 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.433 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:50.433 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:50.433 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:50.433 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:50.433 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:50.433 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:50.433 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:50.433 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:50.433 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:50.433 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:50.433 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:50.433 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:50.433 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.433 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:50.433 23:06:25 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.433 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:50.433 "name": "raid_bdev1", 00:24:50.433 "uuid": "c4ca3575-c3b9-4dc9-87c7-9dcf44b82304", 00:24:50.433 "strip_size_kb": 0, 00:24:50.433 "state": "online", 00:24:50.433 "raid_level": "raid1", 00:24:50.433 "superblock": false, 00:24:50.433 "num_base_bdevs": 4, 00:24:50.433 "num_base_bdevs_discovered": 3, 00:24:50.433 "num_base_bdevs_operational": 3, 00:24:50.433 "base_bdevs_list": [ 00:24:50.433 { 00:24:50.433 "name": null, 00:24:50.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:50.433 "is_configured": false, 00:24:50.433 "data_offset": 0, 00:24:50.433 "data_size": 65536 00:24:50.433 }, 00:24:50.433 { 00:24:50.433 "name": "BaseBdev2", 00:24:50.433 "uuid": "8510c6ab-1027-53a4-93ef-f0802da1289e", 00:24:50.433 "is_configured": true, 00:24:50.433 "data_offset": 0, 00:24:50.433 "data_size": 65536 00:24:50.433 }, 00:24:50.433 { 00:24:50.433 "name": "BaseBdev3", 00:24:50.434 "uuid": "0ee51259-a2a5-5862-bf06-6def624305fe", 00:24:50.434 "is_configured": true, 00:24:50.434 "data_offset": 0, 00:24:50.434 "data_size": 65536 00:24:50.434 }, 00:24:50.434 { 00:24:50.434 "name": "BaseBdev4", 00:24:50.434 "uuid": "7dfeb589-0d77-5369-96d8-e58b63393dcf", 00:24:50.434 "is_configured": true, 00:24:50.434 "data_offset": 0, 00:24:50.434 "data_size": 65536 00:24:50.434 } 00:24:50.434 ] 00:24:50.434 }' 00:24:50.434 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:50.434 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:50.434 [2024-12-09 23:06:25.660527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:24:50.434 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:50.434 Zero copy mechanism will not be used. 00:24:50.434 Running I/O for 60 seconds... 
00:24:50.691 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:50.691 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.691 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:50.691 [2024-12-09 23:06:25.939523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:50.691 23:06:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.691 23:06:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:24:50.692 [2024-12-09 23:06:25.999875] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:24:50.692 [2024-12-09 23:06:26.001615] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:50.949 [2024-12-09 23:06:26.121493] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:50.949 [2024-12-09 23:06:26.122489] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:51.207 [2024-12-09 23:06:26.325610] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:51.207 [2024-12-09 23:06:26.326181] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:51.463 195.00 IOPS, 585.00 MiB/s [2024-12-09T23:06:26.826Z] [2024-12-09 23:06:26.684320] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:51.463 [2024-12-09 23:06:26.805838] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:51.463 [2024-12-09 23:06:26.806072] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:51.725 23:06:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:51.725 23:06:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:51.725 23:06:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:51.725 23:06:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:51.725 23:06:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:51.725 23:06:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.725 23:06:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:51.725 23:06:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.725 23:06:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:51.725 23:06:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.725 23:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:51.725 "name": "raid_bdev1", 00:24:51.725 "uuid": "c4ca3575-c3b9-4dc9-87c7-9dcf44b82304", 00:24:51.725 "strip_size_kb": 0, 00:24:51.725 "state": "online", 00:24:51.725 "raid_level": "raid1", 00:24:51.725 "superblock": false, 00:24:51.725 "num_base_bdevs": 4, 00:24:51.725 "num_base_bdevs_discovered": 4, 00:24:51.725 "num_base_bdevs_operational": 4, 00:24:51.725 "process": { 00:24:51.725 "type": "rebuild", 00:24:51.725 "target": "spare", 00:24:51.725 "progress": { 00:24:51.725 "blocks": 12288, 00:24:51.725 "percent": 18 00:24:51.725 } 00:24:51.725 }, 00:24:51.725 "base_bdevs_list": [ 00:24:51.725 { 00:24:51.725 "name": "spare", 00:24:51.725 "uuid": 
"4c7bef9b-b373-5c0c-8ad3-a7a9f9fc73b4", 00:24:51.725 "is_configured": true, 00:24:51.725 "data_offset": 0, 00:24:51.725 "data_size": 65536 00:24:51.725 }, 00:24:51.725 { 00:24:51.725 "name": "BaseBdev2", 00:24:51.725 "uuid": "8510c6ab-1027-53a4-93ef-f0802da1289e", 00:24:51.725 "is_configured": true, 00:24:51.725 "data_offset": 0, 00:24:51.725 "data_size": 65536 00:24:51.725 }, 00:24:51.725 { 00:24:51.725 "name": "BaseBdev3", 00:24:51.725 "uuid": "0ee51259-a2a5-5862-bf06-6def624305fe", 00:24:51.725 "is_configured": true, 00:24:51.726 "data_offset": 0, 00:24:51.726 "data_size": 65536 00:24:51.726 }, 00:24:51.726 { 00:24:51.726 "name": "BaseBdev4", 00:24:51.726 "uuid": "7dfeb589-0d77-5369-96d8-e58b63393dcf", 00:24:51.726 "is_configured": true, 00:24:51.726 "data_offset": 0, 00:24:51.726 "data_size": 65536 00:24:51.726 } 00:24:51.726 ] 00:24:51.726 }' 00:24:51.726 23:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:51.726 23:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:51.726 23:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:51.726 23:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:51.726 23:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:51.726 23:06:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.726 23:06:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:51.983 [2024-12-09 23:06:27.085878] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:51.983 [2024-12-09 23:06:27.162042] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:51.983 [2024-12-09 23:06:27.205450] bdev_raid.c:2571:raid_bdev_process_finish_done: 
*WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:51.983 [2024-12-09 23:06:27.214213] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:51.983 [2024-12-09 23:06:27.214252] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:51.983 [2024-12-09 23:06:27.214263] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:51.983 [2024-12-09 23:06:27.229491] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:24:51.983 23:06:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.983 23:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:51.983 23:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:51.983 23:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:51.983 23:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:51.983 23:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:51.983 23:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:51.983 23:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:51.983 23:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:51.983 23:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:51.983 23:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:51.983 23:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.983 23:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:24:51.983 23:06:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.983 23:06:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:51.983 23:06:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.983 23:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:51.983 "name": "raid_bdev1", 00:24:51.983 "uuid": "c4ca3575-c3b9-4dc9-87c7-9dcf44b82304", 00:24:51.983 "strip_size_kb": 0, 00:24:51.983 "state": "online", 00:24:51.983 "raid_level": "raid1", 00:24:51.983 "superblock": false, 00:24:51.983 "num_base_bdevs": 4, 00:24:51.983 "num_base_bdevs_discovered": 3, 00:24:51.983 "num_base_bdevs_operational": 3, 00:24:51.983 "base_bdevs_list": [ 00:24:51.983 { 00:24:51.983 "name": null, 00:24:51.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:51.983 "is_configured": false, 00:24:51.983 "data_offset": 0, 00:24:51.983 "data_size": 65536 00:24:51.983 }, 00:24:51.983 { 00:24:51.983 "name": "BaseBdev2", 00:24:51.983 "uuid": "8510c6ab-1027-53a4-93ef-f0802da1289e", 00:24:51.983 "is_configured": true, 00:24:51.983 "data_offset": 0, 00:24:51.983 "data_size": 65536 00:24:51.983 }, 00:24:51.983 { 00:24:51.983 "name": "BaseBdev3", 00:24:51.983 "uuid": "0ee51259-a2a5-5862-bf06-6def624305fe", 00:24:51.983 "is_configured": true, 00:24:51.983 "data_offset": 0, 00:24:51.983 "data_size": 65536 00:24:51.983 }, 00:24:51.983 { 00:24:51.983 "name": "BaseBdev4", 00:24:51.983 "uuid": "7dfeb589-0d77-5369-96d8-e58b63393dcf", 00:24:51.983 "is_configured": true, 00:24:51.983 "data_offset": 0, 00:24:51.983 "data_size": 65536 00:24:51.983 } 00:24:51.983 ] 00:24:51.983 }' 00:24:51.983 23:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:51.983 23:06:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:52.241 23:06:27 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:52.241 23:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:52.241 23:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:52.241 23:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:52.241 23:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:52.241 23:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:52.241 23:06:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.241 23:06:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:52.241 23:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:52.241 23:06:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.241 23:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:52.241 "name": "raid_bdev1", 00:24:52.241 "uuid": "c4ca3575-c3b9-4dc9-87c7-9dcf44b82304", 00:24:52.241 "strip_size_kb": 0, 00:24:52.241 "state": "online", 00:24:52.241 "raid_level": "raid1", 00:24:52.241 "superblock": false, 00:24:52.241 "num_base_bdevs": 4, 00:24:52.241 "num_base_bdevs_discovered": 3, 00:24:52.241 "num_base_bdevs_operational": 3, 00:24:52.241 "base_bdevs_list": [ 00:24:52.241 { 00:24:52.241 "name": null, 00:24:52.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:52.241 "is_configured": false, 00:24:52.241 "data_offset": 0, 00:24:52.241 "data_size": 65536 00:24:52.241 }, 00:24:52.241 { 00:24:52.241 "name": "BaseBdev2", 00:24:52.241 "uuid": "8510c6ab-1027-53a4-93ef-f0802da1289e", 00:24:52.241 "is_configured": true, 00:24:52.241 "data_offset": 0, 00:24:52.241 "data_size": 65536 00:24:52.241 }, 00:24:52.241 { 
00:24:52.241 "name": "BaseBdev3", 00:24:52.241 "uuid": "0ee51259-a2a5-5862-bf06-6def624305fe", 00:24:52.241 "is_configured": true, 00:24:52.241 "data_offset": 0, 00:24:52.241 "data_size": 65536 00:24:52.241 }, 00:24:52.241 { 00:24:52.241 "name": "BaseBdev4", 00:24:52.241 "uuid": "7dfeb589-0d77-5369-96d8-e58b63393dcf", 00:24:52.241 "is_configured": true, 00:24:52.241 "data_offset": 0, 00:24:52.241 "data_size": 65536 00:24:52.241 } 00:24:52.241 ] 00:24:52.241 }' 00:24:52.241 23:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:52.499 23:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:52.499 23:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:52.499 23:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:52.499 23:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:52.499 23:06:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.499 23:06:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:52.499 [2024-12-09 23:06:27.659864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:52.499 23:06:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.499 23:06:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:24:52.499 216.00 IOPS, 648.00 MiB/s [2024-12-09T23:06:27.862Z] [2024-12-09 23:06:27.703718] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:24:52.499 [2024-12-09 23:06:27.705427] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:52.499 [2024-12-09 23:06:27.834649] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
2048 offset_begin: 0 offset_end: 6144 00:24:52.499 [2024-12-09 23:06:27.835770] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:52.756 [2024-12-09 23:06:28.054786] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:52.756 [2024-12-09 23:06:28.055479] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:53.347 [2024-12-09 23:06:28.523659] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:53.347 23:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:53.347 23:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:53.347 23:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:53.347 23:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:53.347 23:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:53.347 23:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:53.347 23:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:53.347 23:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.347 23:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:53.606 173.67 IOPS, 521.00 MiB/s [2024-12-09T23:06:28.969Z] 23:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.606 23:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:53.606 "name": "raid_bdev1", 00:24:53.606 "uuid": 
"c4ca3575-c3b9-4dc9-87c7-9dcf44b82304", 00:24:53.606 "strip_size_kb": 0, 00:24:53.606 "state": "online", 00:24:53.606 "raid_level": "raid1", 00:24:53.606 "superblock": false, 00:24:53.606 "num_base_bdevs": 4, 00:24:53.606 "num_base_bdevs_discovered": 4, 00:24:53.606 "num_base_bdevs_operational": 4, 00:24:53.606 "process": { 00:24:53.606 "type": "rebuild", 00:24:53.606 "target": "spare", 00:24:53.606 "progress": { 00:24:53.606 "blocks": 10240, 00:24:53.606 "percent": 15 00:24:53.606 } 00:24:53.606 }, 00:24:53.606 "base_bdevs_list": [ 00:24:53.606 { 00:24:53.606 "name": "spare", 00:24:53.606 "uuid": "4c7bef9b-b373-5c0c-8ad3-a7a9f9fc73b4", 00:24:53.606 "is_configured": true, 00:24:53.606 "data_offset": 0, 00:24:53.606 "data_size": 65536 00:24:53.606 }, 00:24:53.606 { 00:24:53.606 "name": "BaseBdev2", 00:24:53.606 "uuid": "8510c6ab-1027-53a4-93ef-f0802da1289e", 00:24:53.606 "is_configured": true, 00:24:53.606 "data_offset": 0, 00:24:53.606 "data_size": 65536 00:24:53.606 }, 00:24:53.606 { 00:24:53.606 "name": "BaseBdev3", 00:24:53.606 "uuid": "0ee51259-a2a5-5862-bf06-6def624305fe", 00:24:53.606 "is_configured": true, 00:24:53.606 "data_offset": 0, 00:24:53.606 "data_size": 65536 00:24:53.606 }, 00:24:53.606 { 00:24:53.606 "name": "BaseBdev4", 00:24:53.606 "uuid": "7dfeb589-0d77-5369-96d8-e58b63393dcf", 00:24:53.606 "is_configured": true, 00:24:53.606 "data_offset": 0, 00:24:53.606 "data_size": 65536 00:24:53.606 } 00:24:53.606 ] 00:24:53.606 }' 00:24:53.606 23:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:53.606 23:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:53.606 23:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:53.606 23:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:53.606 23:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 
-- # '[' false = true ']' 00:24:53.606 23:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:24:53.606 23:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:24:53.606 23:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:24:53.606 23:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:24:53.606 23:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.606 23:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:53.606 [2024-12-09 23:06:28.791638] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:53.606 [2024-12-09 23:06:28.867595] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:53.606 [2024-12-09 23:06:28.868746] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:53.864 [2024-12-09 23:06:28.970086] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:24:53.864 [2024-12-09 23:06:28.970262] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:24:53.864 23:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.864 23:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:24:53.864 23:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:24:53.864 23:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:53.864 23:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:53.864 23:06:28 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:53.864 23:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:53.864 23:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:53.864 23:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:53.864 23:06:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:53.864 23:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.864 23:06:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:53.864 23:06:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.864 23:06:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:53.864 "name": "raid_bdev1", 00:24:53.864 "uuid": "c4ca3575-c3b9-4dc9-87c7-9dcf44b82304", 00:24:53.865 "strip_size_kb": 0, 00:24:53.865 "state": "online", 00:24:53.865 "raid_level": "raid1", 00:24:53.865 "superblock": false, 00:24:53.865 "num_base_bdevs": 4, 00:24:53.865 "num_base_bdevs_discovered": 3, 00:24:53.865 "num_base_bdevs_operational": 3, 00:24:53.865 "process": { 00:24:53.865 "type": "rebuild", 00:24:53.865 "target": "spare", 00:24:53.865 "progress": { 00:24:53.865 "blocks": 14336, 00:24:53.865 "percent": 21 00:24:53.865 } 00:24:53.865 }, 00:24:53.865 "base_bdevs_list": [ 00:24:53.865 { 00:24:53.865 "name": "spare", 00:24:53.865 "uuid": "4c7bef9b-b373-5c0c-8ad3-a7a9f9fc73b4", 00:24:53.865 "is_configured": true, 00:24:53.865 "data_offset": 0, 00:24:53.865 "data_size": 65536 00:24:53.865 }, 00:24:53.865 { 00:24:53.865 "name": null, 00:24:53.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:53.865 "is_configured": false, 00:24:53.865 "data_offset": 0, 00:24:53.865 "data_size": 65536 00:24:53.865 }, 00:24:53.865 { 
00:24:53.865 "name": "BaseBdev3", 00:24:53.865 "uuid": "0ee51259-a2a5-5862-bf06-6def624305fe", 00:24:53.865 "is_configured": true, 00:24:53.865 "data_offset": 0, 00:24:53.865 "data_size": 65536 00:24:53.865 }, 00:24:53.865 { 00:24:53.865 "name": "BaseBdev4", 00:24:53.865 "uuid": "7dfeb589-0d77-5369-96d8-e58b63393dcf", 00:24:53.865 "is_configured": true, 00:24:53.865 "data_offset": 0, 00:24:53.865 "data_size": 65536 00:24:53.865 } 00:24:53.865 ] 00:24:53.865 }' 00:24:53.865 23:06:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:53.865 23:06:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:53.865 23:06:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:53.865 23:06:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:53.865 23:06:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=397 00:24:53.865 23:06:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:53.865 23:06:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:53.865 23:06:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:53.865 23:06:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:53.865 23:06:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:53.865 23:06:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:53.865 23:06:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:53.865 23:06:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.865 23:06:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 
-- # set +x 00:24:53.865 23:06:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:53.865 23:06:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.865 23:06:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:53.865 "name": "raid_bdev1", 00:24:53.865 "uuid": "c4ca3575-c3b9-4dc9-87c7-9dcf44b82304", 00:24:53.865 "strip_size_kb": 0, 00:24:53.865 "state": "online", 00:24:53.865 "raid_level": "raid1", 00:24:53.865 "superblock": false, 00:24:53.865 "num_base_bdevs": 4, 00:24:53.865 "num_base_bdevs_discovered": 3, 00:24:53.865 "num_base_bdevs_operational": 3, 00:24:53.865 "process": { 00:24:53.865 "type": "rebuild", 00:24:53.865 "target": "spare", 00:24:53.865 "progress": { 00:24:53.865 "blocks": 14336, 00:24:53.865 "percent": 21 00:24:53.865 } 00:24:53.865 }, 00:24:53.865 "base_bdevs_list": [ 00:24:53.865 { 00:24:53.865 "name": "spare", 00:24:53.865 "uuid": "4c7bef9b-b373-5c0c-8ad3-a7a9f9fc73b4", 00:24:53.865 "is_configured": true, 00:24:53.865 "data_offset": 0, 00:24:53.865 "data_size": 65536 00:24:53.865 }, 00:24:53.865 { 00:24:53.865 "name": null, 00:24:53.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:53.865 "is_configured": false, 00:24:53.865 "data_offset": 0, 00:24:53.865 "data_size": 65536 00:24:53.865 }, 00:24:53.865 { 00:24:53.865 "name": "BaseBdev3", 00:24:53.865 "uuid": "0ee51259-a2a5-5862-bf06-6def624305fe", 00:24:53.865 "is_configured": true, 00:24:53.865 "data_offset": 0, 00:24:53.865 "data_size": 65536 00:24:53.865 }, 00:24:53.865 { 00:24:53.865 "name": "BaseBdev4", 00:24:53.865 "uuid": "7dfeb589-0d77-5369-96d8-e58b63393dcf", 00:24:53.865 "is_configured": true, 00:24:53.865 "data_offset": 0, 00:24:53.865 "data_size": 65536 00:24:53.865 } 00:24:53.865 ] 00:24:53.865 }' 00:24:53.865 23:06:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:53.865 23:06:29 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:53.865 23:06:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:53.865 23:06:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:53.865 23:06:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:54.129 [2024-12-09 23:06:29.411106] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:24:54.390 145.50 IOPS, 436.50 MiB/s [2024-12-09T23:06:29.753Z] [2024-12-09 23:06:29.733422] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:24:54.648 [2024-12-09 23:06:29.957846] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:24:54.907 23:06:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:54.907 23:06:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:54.907 23:06:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:54.907 23:06:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:54.907 23:06:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:54.907 23:06:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:54.907 23:06:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:54.907 23:06:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.907 23:06:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:54.907 23:06:30 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:54.907 23:06:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.907 23:06:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:54.907 "name": "raid_bdev1", 00:24:54.907 "uuid": "c4ca3575-c3b9-4dc9-87c7-9dcf44b82304", 00:24:54.907 "strip_size_kb": 0, 00:24:54.907 "state": "online", 00:24:54.907 "raid_level": "raid1", 00:24:54.907 "superblock": false, 00:24:54.907 "num_base_bdevs": 4, 00:24:54.907 "num_base_bdevs_discovered": 3, 00:24:54.907 "num_base_bdevs_operational": 3, 00:24:54.907 "process": { 00:24:54.907 "type": "rebuild", 00:24:54.907 "target": "spare", 00:24:54.907 "progress": { 00:24:54.907 "blocks": 30720, 00:24:54.907 "percent": 46 00:24:54.907 } 00:24:54.907 }, 00:24:54.907 "base_bdevs_list": [ 00:24:54.907 { 00:24:54.907 "name": "spare", 00:24:54.907 "uuid": "4c7bef9b-b373-5c0c-8ad3-a7a9f9fc73b4", 00:24:54.907 "is_configured": true, 00:24:54.907 "data_offset": 0, 00:24:54.907 "data_size": 65536 00:24:54.907 }, 00:24:54.907 { 00:24:54.907 "name": null, 00:24:54.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.907 "is_configured": false, 00:24:54.907 "data_offset": 0, 00:24:54.907 "data_size": 65536 00:24:54.907 }, 00:24:54.907 { 00:24:54.907 "name": "BaseBdev3", 00:24:54.907 "uuid": "0ee51259-a2a5-5862-bf06-6def624305fe", 00:24:54.907 "is_configured": true, 00:24:54.907 "data_offset": 0, 00:24:54.907 "data_size": 65536 00:24:54.907 }, 00:24:54.907 { 00:24:54.907 "name": "BaseBdev4", 00:24:54.907 "uuid": "7dfeb589-0d77-5369-96d8-e58b63393dcf", 00:24:54.907 "is_configured": true, 00:24:54.907 "data_offset": 0, 00:24:54.907 "data_size": 65536 00:24:54.907 } 00:24:54.907 ] 00:24:54.907 }' 00:24:54.907 23:06:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:54.907 23:06:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild 
== \r\e\b\u\i\l\d ]] 00:24:54.907 23:06:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:55.164 23:06:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:55.164 23:06:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:55.164 [2024-12-09 23:06:30.291331] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:24:55.164 [2024-12-09 23:06:30.291553] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:24:55.164 [2024-12-09 23:06:30.511749] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:24:55.422 127.40 IOPS, 382.20 MiB/s [2024-12-09T23:06:30.785Z] [2024-12-09 23:06:30.725615] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:24:55.988 [2024-12-09 23:06:31.155644] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:24:55.988 23:06:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:55.988 23:06:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:55.988 23:06:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:55.988 23:06:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:55.988 23:06:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:55.988 23:06:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:55.988 23:06:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:24:55.988 23:06:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:55.988 23:06:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.988 23:06:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:55.988 23:06:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.988 23:06:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:55.988 "name": "raid_bdev1", 00:24:55.988 "uuid": "c4ca3575-c3b9-4dc9-87c7-9dcf44b82304", 00:24:55.988 "strip_size_kb": 0, 00:24:55.988 "state": "online", 00:24:55.988 "raid_level": "raid1", 00:24:55.988 "superblock": false, 00:24:55.988 "num_base_bdevs": 4, 00:24:55.988 "num_base_bdevs_discovered": 3, 00:24:55.988 "num_base_bdevs_operational": 3, 00:24:55.988 "process": { 00:24:55.988 "type": "rebuild", 00:24:55.988 "target": "spare", 00:24:55.988 "progress": { 00:24:55.988 "blocks": 49152, 00:24:55.988 "percent": 75 00:24:55.988 } 00:24:55.988 }, 00:24:55.988 "base_bdevs_list": [ 00:24:55.988 { 00:24:55.988 "name": "spare", 00:24:55.988 "uuid": "4c7bef9b-b373-5c0c-8ad3-a7a9f9fc73b4", 00:24:55.988 "is_configured": true, 00:24:55.988 "data_offset": 0, 00:24:55.988 "data_size": 65536 00:24:55.988 }, 00:24:55.988 { 00:24:55.988 "name": null, 00:24:55.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.988 "is_configured": false, 00:24:55.988 "data_offset": 0, 00:24:55.988 "data_size": 65536 00:24:55.988 }, 00:24:55.988 { 00:24:55.988 "name": "BaseBdev3", 00:24:55.988 "uuid": "0ee51259-a2a5-5862-bf06-6def624305fe", 00:24:55.988 "is_configured": true, 00:24:55.988 "data_offset": 0, 00:24:55.988 "data_size": 65536 00:24:55.988 }, 00:24:55.988 { 00:24:55.988 "name": "BaseBdev4", 00:24:55.988 "uuid": "7dfeb589-0d77-5369-96d8-e58b63393dcf", 00:24:55.988 "is_configured": true, 00:24:55.988 "data_offset": 0, 00:24:55.988 
"data_size": 65536 00:24:55.988 } 00:24:55.988 ] 00:24:55.988 }' 00:24:55.988 23:06:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:55.988 23:06:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:55.988 23:06:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:56.245 23:06:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:56.245 23:06:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:56.245 [2024-12-09 23:06:31.380680] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:24:56.245 [2024-12-09 23:06:31.594617] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:24:57.068 111.67 IOPS, 335.00 MiB/s [2024-12-09T23:06:32.431Z] [2024-12-09 23:06:32.350695] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:57.068 23:06:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:57.068 23:06:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:57.068 23:06:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:57.068 23:06:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:57.068 23:06:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:57.068 23:06:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:57.068 23:06:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:57.068 23:06:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:24:57.068 23:06:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.068 23:06:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:57.068 23:06:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.068 23:06:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:57.068 "name": "raid_bdev1", 00:24:57.068 "uuid": "c4ca3575-c3b9-4dc9-87c7-9dcf44b82304", 00:24:57.068 "strip_size_kb": 0, 00:24:57.068 "state": "online", 00:24:57.068 "raid_level": "raid1", 00:24:57.068 "superblock": false, 00:24:57.068 "num_base_bdevs": 4, 00:24:57.068 "num_base_bdevs_discovered": 3, 00:24:57.068 "num_base_bdevs_operational": 3, 00:24:57.068 "process": { 00:24:57.068 "type": "rebuild", 00:24:57.068 "target": "spare", 00:24:57.068 "progress": { 00:24:57.068 "blocks": 65536, 00:24:57.068 "percent": 100 00:24:57.068 } 00:24:57.068 }, 00:24:57.068 "base_bdevs_list": [ 00:24:57.068 { 00:24:57.068 "name": "spare", 00:24:57.068 "uuid": "4c7bef9b-b373-5c0c-8ad3-a7a9f9fc73b4", 00:24:57.068 "is_configured": true, 00:24:57.068 "data_offset": 0, 00:24:57.068 "data_size": 65536 00:24:57.068 }, 00:24:57.068 { 00:24:57.068 "name": null, 00:24:57.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.068 "is_configured": false, 00:24:57.068 "data_offset": 0, 00:24:57.068 "data_size": 65536 00:24:57.068 }, 00:24:57.068 { 00:24:57.068 "name": "BaseBdev3", 00:24:57.068 "uuid": "0ee51259-a2a5-5862-bf06-6def624305fe", 00:24:57.068 "is_configured": true, 00:24:57.068 "data_offset": 0, 00:24:57.068 "data_size": 65536 00:24:57.068 }, 00:24:57.068 { 00:24:57.068 "name": "BaseBdev4", 00:24:57.068 "uuid": "7dfeb589-0d77-5369-96d8-e58b63393dcf", 00:24:57.068 "is_configured": true, 00:24:57.068 "data_offset": 0, 00:24:57.068 "data_size": 65536 00:24:57.068 } 00:24:57.068 ] 00:24:57.068 }' 00:24:57.068 23:06:32 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:57.324 23:06:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:57.324 23:06:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:57.324 [2024-12-09 23:06:32.450728] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:57.324 [2024-12-09 23:06:32.452422] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:57.324 23:06:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:57.324 23:06:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:58.143 100.86 IOPS, 302.57 MiB/s [2024-12-09T23:06:33.506Z] 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:58.143 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:58.143 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:58.143 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:58.143 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:58.143 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:58.143 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:58.143 23:06:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.143 23:06:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:58.143 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:58.143 23:06:33 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.420 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:58.420 "name": "raid_bdev1", 00:24:58.420 "uuid": "c4ca3575-c3b9-4dc9-87c7-9dcf44b82304", 00:24:58.420 "strip_size_kb": 0, 00:24:58.420 "state": "online", 00:24:58.420 "raid_level": "raid1", 00:24:58.420 "superblock": false, 00:24:58.420 "num_base_bdevs": 4, 00:24:58.420 "num_base_bdevs_discovered": 3, 00:24:58.420 "num_base_bdevs_operational": 3, 00:24:58.420 "base_bdevs_list": [ 00:24:58.420 { 00:24:58.420 "name": "spare", 00:24:58.420 "uuid": "4c7bef9b-b373-5c0c-8ad3-a7a9f9fc73b4", 00:24:58.420 "is_configured": true, 00:24:58.420 "data_offset": 0, 00:24:58.420 "data_size": 65536 00:24:58.420 }, 00:24:58.420 { 00:24:58.420 "name": null, 00:24:58.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.420 "is_configured": false, 00:24:58.420 "data_offset": 0, 00:24:58.420 "data_size": 65536 00:24:58.420 }, 00:24:58.420 { 00:24:58.420 "name": "BaseBdev3", 00:24:58.420 "uuid": "0ee51259-a2a5-5862-bf06-6def624305fe", 00:24:58.420 "is_configured": true, 00:24:58.420 "data_offset": 0, 00:24:58.420 "data_size": 65536 00:24:58.420 }, 00:24:58.420 { 00:24:58.420 "name": "BaseBdev4", 00:24:58.420 "uuid": "7dfeb589-0d77-5369-96d8-e58b63393dcf", 00:24:58.420 "is_configured": true, 00:24:58.420 "data_offset": 0, 00:24:58.420 "data_size": 65536 00:24:58.420 } 00:24:58.420 ] 00:24:58.420 }' 00:24:58.420 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:58.420 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:58.420 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:58.420 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:24:58.420 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # 
break 00:24:58.420 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:58.420 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:58.420 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:58.420 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:58.420 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:58.420 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:58.420 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:58.420 23:06:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.420 23:06:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:58.420 23:06:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.420 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:58.420 "name": "raid_bdev1", 00:24:58.420 "uuid": "c4ca3575-c3b9-4dc9-87c7-9dcf44b82304", 00:24:58.420 "strip_size_kb": 0, 00:24:58.420 "state": "online", 00:24:58.420 "raid_level": "raid1", 00:24:58.420 "superblock": false, 00:24:58.420 "num_base_bdevs": 4, 00:24:58.420 "num_base_bdevs_discovered": 3, 00:24:58.420 "num_base_bdevs_operational": 3, 00:24:58.420 "base_bdevs_list": [ 00:24:58.420 { 00:24:58.420 "name": "spare", 00:24:58.420 "uuid": "4c7bef9b-b373-5c0c-8ad3-a7a9f9fc73b4", 00:24:58.420 "is_configured": true, 00:24:58.420 "data_offset": 0, 00:24:58.420 "data_size": 65536 00:24:58.420 }, 00:24:58.420 { 00:24:58.420 "name": null, 00:24:58.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.420 "is_configured": false, 00:24:58.420 "data_offset": 0, 
00:24:58.420 "data_size": 65536 00:24:58.420 }, 00:24:58.420 { 00:24:58.420 "name": "BaseBdev3", 00:24:58.420 "uuid": "0ee51259-a2a5-5862-bf06-6def624305fe", 00:24:58.420 "is_configured": true, 00:24:58.420 "data_offset": 0, 00:24:58.420 "data_size": 65536 00:24:58.420 }, 00:24:58.420 { 00:24:58.420 "name": "BaseBdev4", 00:24:58.420 "uuid": "7dfeb589-0d77-5369-96d8-e58b63393dcf", 00:24:58.420 "is_configured": true, 00:24:58.420 "data_offset": 0, 00:24:58.420 "data_size": 65536 00:24:58.420 } 00:24:58.420 ] 00:24:58.420 }' 00:24:58.420 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:58.420 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:58.420 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:58.420 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:58.420 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:58.420 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:58.420 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:58.420 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:58.420 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:58.420 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:58.420 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:58.420 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:58.420 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:58.420 
23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:58.420 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:58.420 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:58.420 23:06:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.420 23:06:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:58.420 23:06:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.420 92.25 IOPS, 276.75 MiB/s [2024-12-09T23:06:33.783Z] 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:58.420 "name": "raid_bdev1", 00:24:58.420 "uuid": "c4ca3575-c3b9-4dc9-87c7-9dcf44b82304", 00:24:58.420 "strip_size_kb": 0, 00:24:58.420 "state": "online", 00:24:58.420 "raid_level": "raid1", 00:24:58.420 "superblock": false, 00:24:58.420 "num_base_bdevs": 4, 00:24:58.420 "num_base_bdevs_discovered": 3, 00:24:58.420 "num_base_bdevs_operational": 3, 00:24:58.420 "base_bdevs_list": [ 00:24:58.420 { 00:24:58.420 "name": "spare", 00:24:58.420 "uuid": "4c7bef9b-b373-5c0c-8ad3-a7a9f9fc73b4", 00:24:58.420 "is_configured": true, 00:24:58.421 "data_offset": 0, 00:24:58.421 "data_size": 65536 00:24:58.421 }, 00:24:58.421 { 00:24:58.421 "name": null, 00:24:58.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.421 "is_configured": false, 00:24:58.421 "data_offset": 0, 00:24:58.421 "data_size": 65536 00:24:58.421 }, 00:24:58.421 { 00:24:58.421 "name": "BaseBdev3", 00:24:58.421 "uuid": "0ee51259-a2a5-5862-bf06-6def624305fe", 00:24:58.421 "is_configured": true, 00:24:58.421 "data_offset": 0, 00:24:58.421 "data_size": 65536 00:24:58.421 }, 00:24:58.421 { 00:24:58.421 "name": "BaseBdev4", 00:24:58.421 "uuid": "7dfeb589-0d77-5369-96d8-e58b63393dcf", 00:24:58.421 "is_configured": true, 00:24:58.421 
"data_offset": 0, 00:24:58.421 "data_size": 65536 00:24:58.421 } 00:24:58.421 ] 00:24:58.421 }' 00:24:58.421 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:58.421 23:06:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:58.720 23:06:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:58.720 23:06:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.720 23:06:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:58.720 [2024-12-09 23:06:33.987361] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:58.720 [2024-12-09 23:06:33.987388] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:58.720 00:24:58.720 Latency(us) 00:24:58.720 [2024-12-09T23:06:34.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:58.720 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:24:58.720 raid_bdev1 : 8.40 89.35 268.06 0.00 0.00 16370.17 239.46 112923.57 00:24:58.720 [2024-12-09T23:06:34.083Z] =================================================================================================================== 00:24:58.720 [2024-12-09T23:06:34.083Z] Total : 89.35 268.06 0.00 0.00 16370.17 239.46 112923.57 00:24:58.720 [2024-12-09 23:06:34.080130] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:58.978 { 00:24:58.978 "results": [ 00:24:58.978 { 00:24:58.978 "job": "raid_bdev1", 00:24:58.978 "core_mask": "0x1", 00:24:58.978 "workload": "randrw", 00:24:58.978 "percentage": 50, 00:24:58.978 "status": "finished", 00:24:58.978 "queue_depth": 2, 00:24:58.978 "io_size": 3145728, 00:24:58.978 "runtime": 8.404926, 00:24:58.978 "iops": 89.3523631261001, 00:24:58.978 "mibps": 268.0570893783003, 00:24:58.978 "io_failed": 0, 
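The `bdev_raid.sh@707`–`@711` trace above is a timed polling loop: it re-queries `bdev_raid_get_bdevs` once per second and breaks out as soon as `jq -r '.process.type // "none"'` no longer reports a rebuild in progress. A minimal self-contained sketch of that pattern follows; the `get_process_type` stub is a stand-in for the real RPC-plus-`jq` query, which is not reproduced here:

```shell
# Stand-in for: rpc_cmd bdev_raid_get_bdevs all | jq -r '.process.type // "none"'
get_process_type() { echo "none"; }

# SECONDS is bash's built-in count of seconds since shell start, so
# this bounds the wait without an explicit timer.
timeout=10
while (( SECONDS < timeout )); do
    # Stop polling once the rebuild process has finished.
    [[ "$(get_process_type)" == "none" ]] && break
    sleep 1
done
state=$(get_process_type)
echo "process=$state"
```

The `// "none"` jq alternative operator is what lets the script treat a missing `.process` object (rebuild finished, field removed from the RPC output) the same as an explicit "none".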
00:24:58.978 "io_timeout": 0, 00:24:58.978 "avg_latency_us": 16370.171857011163, 00:24:58.978 "min_latency_us": 239.45846153846153, 00:24:58.978 "max_latency_us": 112923.56923076924 00:24:58.978 } 00:24:58.978 ], 00:24:58.978 "core_count": 1 00:24:58.978 } 00:24:58.978 [2024-12-09 23:06:34.080349] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:58.978 [2024-12-09 23:06:34.080448] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:58.978 [2024-12-09 23:06:34.080467] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:58.978 23:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.978 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:24:58.978 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:58.978 23:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.978 23:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:58.978 23:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.978 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:24:58.978 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:58.978 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:24:58.978 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:24:58.978 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:58.978 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:24:58.978 23:06:34 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:58.978 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:58.978 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:58.978 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:24:58.978 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:58.978 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:58.978 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:24:58.978 /dev/nbd0 00:24:58.978 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:24:59.237 1+0 records in 00:24:59.237 1+0 records out 00:24:59.237 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209009 s, 19.6 MB/s 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # 
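The `waitfornbd` helper traced above polls `/proc/partitions` up to 20 times for the newly attached nbd device, then issues a single 4 KiB direct-I/O `dd` read to confirm the device actually responds. A hedged sketch of the polling half, using a temporary file in place of `/proc/partitions` so it runs anywhere:

```shell
# Simulated /proc/partitions content (major minor #blocks name).
partitions=$(mktemp)
echo "43 0 65536 nbd0" > "$partitions"

# Poll up to 20 times for the device name to appear; grep -w matches
# the whole word so nbd0 does not also match nbd01.
for ((i = 1; i <= 20; i++)); do
    grep -q -w nbd0 "$partitions" && break
    sleep 0.1
done
echo "found after $i tries"
rm -f "$partitions"
```

In the real helper the loop is followed by the `dd ... iflag=direct` read seen in the log, which catches devices that are listed but not yet serving I/O.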
nbd_list=('/dev/nbd1') 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:24:59.237 /dev/nbd1 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:59.237 1+0 records in 00:24:59.237 1+0 records out 00:24:59.237 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283589 s, 14.4 MB/s 00:24:59.237 23:06:34 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:59.237 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:59.495 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:24:59.495 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:59.495 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:59.495 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:59.495 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:24:59.495 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:59.495 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:24:59.753 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:59.753 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:59.753 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # 
local nbd_name=nbd1 00:24:59.753 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:59.753 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:59.753 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:59.753 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:24:59.753 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:59.753 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:24:59.753 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:24:59.753 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:24:59.753 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:59.753 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:24:59.753 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:59.753 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:59.753 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:59.753 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:24:59.753 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:59.753 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:59.753 23:06:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:24:59.753 /dev/nbd1 00:25:00.011 23:06:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:00.011 
23:06:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:00.011 23:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:25:00.011 23:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:25:00.011 23:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:00.011 23:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:00.011 23:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:25:00.011 23:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:25:00.011 23:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:00.011 23:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:00.011 23:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:00.011 1+0 records in 00:25:00.011 1+0 records out 00:25:00.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263595 s, 15.5 MB/s 00:25:00.011 23:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:00.011 23:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:25:00.011 23:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:00.011 23:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:00.011 23:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:25:00.011 23:06:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:00.011 23:06:35 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:00.011 23:06:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:25:00.011 23:06:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:25:00.011 23:06:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:25:00.011 23:06:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:25:00.011 23:06:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:00.011 23:06:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:25:00.011 23:06:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:00.011 23:06:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:25:00.269 23:06:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:00.269 23:06:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:00.269 23:06:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:00.269 23:06:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:00.269 23:06:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:00.269 23:06:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:00.269 23:06:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:25:00.269 23:06:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:25:00.269 23:06:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:25:00.269 23:06:35 bdev_raid.raid_rebuild_test_io -- 
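The `cmp -i 0 /dev/nbd0 /dev/nbd1` step above is the actual data check of the rebuild test: it byte-compares the rebuilt spare (exported over nbd) against each surviving base bdev. A self-contained illustration with regular temp files standing in for the nbd devices; `-i 0` skips zero leading bytes (mirroring the log's invocation) and `-s` suppresses output, so only the exit status matters:

```shell
# Two stand-in "devices" with identical contents, as expected after a
# successful rebuild.
a=$(mktemp); b=$(mktemp)
printf 'rebuilt-raid-data' > "$a"
printf 'rebuilt-raid-data' > "$b"

# cmp exits 0 iff the inputs are byte-identical.
if cmp -s -i 0 "$a" "$b"; then status=identical; else status=differ; fi
echo "$status"
rm -f "$a" "$b"
```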
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:25:00.269 23:06:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:00.269 23:06:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:00.269 23:06:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:25:00.269 23:06:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:00.269 23:06:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:25:00.269 23:06:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:00.269 23:06:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:00.269 23:06:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:00.269 23:06:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:00.269 23:06:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:00.269 23:06:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:00.269 23:06:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:25:00.269 23:06:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:25:00.269 23:06:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:25:00.269 23:06:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76618 00:25:00.269 23:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76618 ']' 00:25:00.269 23:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76618 00:25:00.269 23:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:25:00.269 23:06:35 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:00.270 23:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76618 00:25:00.270 killing process with pid 76618 00:25:00.270 Received shutdown signal, test time was about 9.957991 seconds 00:25:00.270 00:25:00.270 Latency(us) 00:25:00.270 [2024-12-09T23:06:35.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:00.270 [2024-12-09T23:06:35.633Z] =================================================================================================================== 00:25:00.270 [2024-12-09T23:06:35.633Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:00.270 23:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:00.270 23:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:00.270 23:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76618' 00:25:00.270 23:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76618 00:25:00.270 [2024-12-09 23:06:35.620362] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:00.270 23:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76618 00:25:00.528 [2024-12-09 23:06:35.829728] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:01.092 23:06:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:25:01.092 00:25:01.092 real 0m12.340s 00:25:01.092 user 0m15.148s 00:25:01.092 sys 0m1.278s 00:25:01.092 23:06:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:01.092 ************************************ 00:25:01.092 END TEST raid_rebuild_test_io 00:25:01.092 ************************************ 00:25:01.092 23:06:36 bdev_raid.raid_rebuild_test_io -- 
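The `killprocess` helper traced above first runs `kill -0 <pid>`, which sends no signal at all and merely reports (via exit status) whether the process exists, before sending the real termination signal and waiting on the pid. A sketch of that liveness probe, aimed at the current shell's own PID so the target is guaranteed to exist:

```shell
# kill -0 delivers no signal; exit status 0 means the pid exists and
# we are allowed to signal it, nonzero means it is gone (or not ours).
pid=$$
if kill -0 "$pid" 2>/dev/null; then state=alive; else state=gone; fi
echo "$state"
```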
common/autotest_common.sh@10 -- # set +x 00:25:01.350 23:06:36 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:25:01.350 23:06:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:25:01.350 23:06:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:01.350 23:06:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:01.350 ************************************ 00:25:01.350 START TEST raid_rebuild_test_sb_io 00:25:01.350 ************************************ 00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:01.350 23:06:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3
00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4
00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size
00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg
00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset
00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']'
00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0
00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']'
00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s'
00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77027
00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77027
00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77027 ']'
00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:01.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:01.350 23:06:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:25:01.350 I/O size of 3145728 is greater than zero copy threshold (65536).
00:25:01.350 Zero copy mechanism will not be used.
00:25:01.350 [2024-12-09 23:06:36.542535] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization...
00:25:01.350 [2024-12-09 23:06:36.542637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77027 ]
00:25:01.350 [2024-12-09 23:06:36.692053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:01.608 [2024-12-09 23:06:36.780053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:01.608 [2024-12-09 23:06:36.892011] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:25:01.608 [2024-12-09 23:06:36.892037] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:25:02.175 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:02.175 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0
00:25:02.175 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:25:02.175 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:25:02.175 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.175 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:25:02.175 BaseBdev1_malloc
00:25:02.175 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.175 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:25:02.175 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.175 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:25:02.175 [2024-12-09 23:06:37.436502] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:25:02.175 [2024-12-09 23:06:37.436559] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:02.175 [2024-12-09 23:06:37.436577] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:25:02.175 [2024-12-09 23:06:37.436587] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:02.175 [2024-12-09 23:06:37.438382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:02.175 [2024-12-09 23:06:37.438511] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:25:02.175 BaseBdev1
00:25:02.175 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.175 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:25:02.175 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:25:02.175 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.175 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:25:02.175 BaseBdev2_malloc
00:25:02.175 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.175 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:25:02.175 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.175 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:25:02.175 [2024-12-09 23:06:37.468263] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:25:02.175 [2024-12-09 23:06:37.468312] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:02.175 [2024-12-09 23:06:37.468330] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:25:02.175 [2024-12-09 23:06:37.468338] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:02.175 [2024-12-09 23:06:37.470088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:02.175 [2024-12-09 23:06:37.470127] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:25:02.175 BaseBdev2
00:25:02.175 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.175 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:25:02.175 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:25:02.175 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.175 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:25:02.175 BaseBdev3_malloc
00:25:02.175 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.175 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:25:02.175 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.175 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:25:02.175 [2024-12-09 23:06:37.515269] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:25:02.175 [2024-12-09 23:06:37.515324] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:02.175 [2024-12-09 23:06:37.515348] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:25:02.175 [2024-12-09 23:06:37.515360] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:02.175 [2024-12-09 23:06:37.517255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:02.175 [2024-12-09 23:06:37.517287] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:25:02.175 BaseBdev3
00:25:02.175 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.175 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:25:02.175 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:25:02.175 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.175 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:25:02.434 BaseBdev4_malloc
00:25:02.434 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.434 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:25:02.434 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.434 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:25:02.434 [2024-12-09 23:06:37.547373] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:25:02.434 [2024-12-09 23:06:37.547420] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:02.434 [2024-12-09 23:06:37.547435] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:25:02.434 [2024-12-09 23:06:37.547443] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:02.434 [2024-12-09 23:06:37.549211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:02.434 [2024-12-09 23:06:37.549253] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:25:02.434 BaseBdev4
00:25:02.434 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.434 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:25:02.434 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.434 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:25:02.434 spare_malloc
00:25:02.434 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.434 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:25:02.434 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.434 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:25:02.434 spare_delay
00:25:02.435 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.435 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:25:02.435 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.435 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:25:02.435 [2024-12-09 23:06:37.595770] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:25:02.435 [2024-12-09 23:06:37.595827] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:02.435 [2024-12-09 23:06:37.595843] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:25:02.435 [2024-12-09 23:06:37.595852] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:02.435 [2024-12-09 23:06:37.597804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:02.435 [2024-12-09 23:06:37.597839] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:25:02.435 spare
00:25:02.435 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.435 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1
00:25:02.435 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.435 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:25:02.435 [2024-12-09 23:06:37.603791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:25:02.435 [2024-12-09 23:06:37.605444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:25:02.435 [2024-12-09 23:06:37.605556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:25:02.435 [2024-12-09 23:06:37.605656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:25:02.435 [2024-12-09 23:06:37.605830] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:25:02.435 [2024-12-09 23:06:37.605889] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:25:02.435 [2024-12-09 23:06:37.606139] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:25:02.435 [2024-12-09 23:06:37.606323] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:25:02.435 [2024-12-09 23:06:37.606381] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:25:02.435 [2024-12-09 23:06:37.606589] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:25:02.435 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.435 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:25:02.435 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:25:02.435 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:25:02.435 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:25:02.435 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:25:02.435 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:25:02.435 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:02.435 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:02.435 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:02.435 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:02.435 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:02.435 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:02.435 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.435 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:25:02.435 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.435 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:02.435 "name": "raid_bdev1",
00:25:02.435 "uuid": "dd27c68f-87b8-4b80-ab33-fd0f92bf9730",
00:25:02.435 "strip_size_kb": 0,
00:25:02.435 "state": "online",
00:25:02.435 "raid_level": "raid1",
00:25:02.435 "superblock": true,
00:25:02.435 "num_base_bdevs": 4,
00:25:02.435 "num_base_bdevs_discovered": 4,
00:25:02.435 "num_base_bdevs_operational": 4,
00:25:02.435 "base_bdevs_list": [
00:25:02.435 {
00:25:02.435 "name": "BaseBdev1",
00:25:02.435 "uuid": "843d4e6c-2a89-532b-aec1-10f1a9293238",
00:25:02.435 "is_configured": true,
00:25:02.435 "data_offset": 2048,
00:25:02.435 "data_size": 63488
00:25:02.435 },
00:25:02.435 {
00:25:02.435 "name": "BaseBdev2",
00:25:02.435 "uuid": "88fc9aaf-3102-53f1-a32a-44bf61030bea",
00:25:02.435 "is_configured": true,
00:25:02.435 "data_offset": 2048,
00:25:02.435 "data_size": 63488
00:25:02.435 },
00:25:02.435 {
00:25:02.435 "name": "BaseBdev3",
00:25:02.435 "uuid": "01f50df9-3fdf-501b-b951-e6ceabfe7945",
00:25:02.435 "is_configured": true,
00:25:02.435 "data_offset": 2048,
00:25:02.435 "data_size": 63488
00:25:02.435 },
00:25:02.435 {
00:25:02.435 "name": "BaseBdev4",
00:25:02.435 "uuid": "69cd2247-774f-5589-a50e-d70b382b373c",
00:25:02.435 "is_configured": true,
00:25:02.435 "data_offset": 2048,
00:25:02.435 "data_size": 63488
00:25:02.435 }
00:25:02.435 ]
00:25:02.435 }'
00:25:02.435 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:02.435 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:25:02.694 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:25:02.694 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:25:02.694 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.694 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:25:02.694 [2024-12-09 23:06:37.920143] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:25:02.694 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.694 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488
00:25:02.694 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:02.694 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.694 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:25:02.694 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:25:02.694 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.694 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048
00:25:02.694 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']'
00:25:02.694 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:25:02.694 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.694 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:25:02.694 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:25:02.694 [2024-12-09 23:06:37.971828] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:25:02.694 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.694 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:25:02.694 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:25:02.694 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:25:02.694 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:25:02.694 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:25:02.694 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:25:02.694 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:02.694 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:02.694 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:02.694 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:02.694 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:02.694 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.694 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:02.694 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:25:02.694 23:06:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:02.694 23:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:02.694 "name": "raid_bdev1",
00:25:02.694 "uuid": "dd27c68f-87b8-4b80-ab33-fd0f92bf9730",
00:25:02.694 "strip_size_kb": 0,
00:25:02.694 "state": "online",
00:25:02.694 "raid_level": "raid1",
00:25:02.694 "superblock": true,
00:25:02.694 "num_base_bdevs": 4,
00:25:02.694 "num_base_bdevs_discovered": 3,
00:25:02.694 "num_base_bdevs_operational": 3,
00:25:02.694 "base_bdevs_list": [
00:25:02.694 {
00:25:02.694 "name": null,
00:25:02.694 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:02.694 "is_configured": false,
00:25:02.694 "data_offset": 0,
00:25:02.694 "data_size": 63488
00:25:02.694 },
00:25:02.694 {
00:25:02.694 "name": "BaseBdev2",
00:25:02.694 "uuid": "88fc9aaf-3102-53f1-a32a-44bf61030bea",
00:25:02.694 "is_configured": true,
00:25:02.694 "data_offset": 2048,
00:25:02.694 "data_size": 63488
00:25:02.694 },
00:25:02.694 {
00:25:02.694 "name": "BaseBdev3",
00:25:02.694 "uuid": "01f50df9-3fdf-501b-b951-e6ceabfe7945",
00:25:02.694 "is_configured": true,
00:25:02.694 "data_offset": 2048,
00:25:02.694 "data_size": 63488
00:25:02.694 },
00:25:02.694 {
00:25:02.694 "name": "BaseBdev4",
00:25:02.694 "uuid": "69cd2247-774f-5589-a50e-d70b382b373c",
00:25:02.694 "is_configured": true,
00:25:02.694 "data_offset": 2048,
00:25:02.694 "data_size": 63488
00:25:02.694 }
00:25:02.694 ]
00:25:02.694 }'
00:25:02.694 23:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:02.694 23:06:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:25:02.952 [2024-12-09 23:06:38.056434] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:25:02.952 I/O size of 3145728 is greater than zero copy threshold (65536).
00:25:02.952 Zero copy mechanism will not be used.
00:25:02.952 Running I/O for 60 seconds...
00:25:02.952 23:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:25:02.952 23:06:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:02.952 23:06:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:25:02.952 [2024-12-09 23:06:38.289282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:25:03.209 23:06:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:03.209 23:06:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1
00:25:03.209 [2024-12-09 23:06:38.331939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0
[2024-12-09 23:06:38.333638] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:25:03.210 [2024-12-09 23:06:38.441202] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
[2024-12-09 23:06:38.441731] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:25:03.210 [2024-12-09 23:06:38.562903] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
[2024-12-09 23:06:38.563152] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:25:03.775 [2024-12-09 23:06:38.935908] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:25:04.033 167.00 IOPS, 501.00 MiB/s [2024-12-09T23:06:39.396Z]
[2024-12-09 23:06:39.270331] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:25:04.033 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:25:04.033 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:25:04.033 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:25:04.033 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:25:04.033 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:25:04.033 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:04.033 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:04.033 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:25:04.033 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:04.033 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:04.033 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:25:04.033 "name": "raid_bdev1",
00:25:04.033 "uuid": "dd27c68f-87b8-4b80-ab33-fd0f92bf9730",
00:25:04.033 "strip_size_kb": 0,
00:25:04.033 "state": "online",
00:25:04.033 "raid_level": "raid1",
00:25:04.033 "superblock": true,
00:25:04.033 "num_base_bdevs": 4,
00:25:04.033 "num_base_bdevs_discovered": 4,
00:25:04.033 "num_base_bdevs_operational": 4,
00:25:04.033 "process": {
00:25:04.033 "type": "rebuild",
00:25:04.033 "target": "spare",
00:25:04.033 "progress": {
00:25:04.033 "blocks": 14336,
00:25:04.033 "percent": 22
00:25:04.033 }
00:25:04.033 },
00:25:04.033 "base_bdevs_list": [
00:25:04.033 {
00:25:04.033 "name": "spare",
00:25:04.033 "uuid": "a6b8016b-0f9d-5b7e-a63c-3a6152f6bbeb",
00:25:04.033 "is_configured": true,
00:25:04.033 "data_offset": 2048,
00:25:04.033 "data_size": 63488
00:25:04.033 },
00:25:04.033 {
00:25:04.033 "name": "BaseBdev2",
00:25:04.033 "uuid": "88fc9aaf-3102-53f1-a32a-44bf61030bea",
00:25:04.033 "is_configured": true,
00:25:04.033 "data_offset": 2048,
00:25:04.033 "data_size": 63488
00:25:04.033 },
00:25:04.033 {
00:25:04.033 "name": "BaseBdev3",
00:25:04.033 "uuid": "01f50df9-3fdf-501b-b951-e6ceabfe7945",
00:25:04.033 "is_configured": true,
00:25:04.033 "data_offset": 2048,
00:25:04.033 "data_size": 63488
00:25:04.033 },
00:25:04.033 {
00:25:04.033 "name": "BaseBdev4",
00:25:04.033 "uuid": "69cd2247-774f-5589-a50e-d70b382b373c",
00:25:04.033 "is_configured": true,
00:25:04.033 "data_offset": 2048,
00:25:04.033 "data_size": 63488
00:25:04.033 }
00:25:04.033 ]
00:25:04.033 }'
00:25:04.033 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:25:04.033 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:25:04.292 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:25:04.292 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:25:04.292 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:25:04.292 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:04.292 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:25:04.292 [2024-12-09 23:06:39.432132] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:25:04.292 [2024-12-09 23:06:39.605283] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:25:04.292 [2024-12-09 23:06:39.614227] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:25:04.292 [2024-12-09 23:06:39.614270] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:25:04.292 [2024-12-09 23:06:39.614285] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:25:04.292 [2024-12-09 23:06:39.634124] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220
00:25:04.292 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:04.292 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:25:04.292 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:25:04.292 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:25:04.292 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:25:04.292 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:25:04.292 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:25:04.292 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:04.292 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:04.292 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:04.292 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:04.292 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:04.292 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:04.292 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:04.292 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:25:04.550 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:04.550 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:04.550 "name": "raid_bdev1",
00:25:04.550 "uuid": "dd27c68f-87b8-4b80-ab33-fd0f92bf9730",
00:25:04.550 "strip_size_kb": 0,
00:25:04.550 "state": "online",
00:25:04.550 "raid_level": "raid1",
00:25:04.550 "superblock": true,
00:25:04.550 "num_base_bdevs": 4,
00:25:04.550 "num_base_bdevs_discovered": 3,
00:25:04.550 "num_base_bdevs_operational": 3,
00:25:04.550 "base_bdevs_list": [
00:25:04.550 {
00:25:04.550 "name": null,
00:25:04.550 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:04.550 "is_configured": false,
00:25:04.550 "data_offset": 0,
00:25:04.550 "data_size": 63488
00:25:04.550 },
00:25:04.550 {
00:25:04.550 "name": "BaseBdev2",
00:25:04.550 "uuid": "88fc9aaf-3102-53f1-a32a-44bf61030bea",
00:25:04.550 "is_configured": true,
00:25:04.550 "data_offset": 2048,
00:25:04.550 "data_size": 63488
00:25:04.550 },
00:25:04.550 {
00:25:04.550 "name": "BaseBdev3",
00:25:04.550 "uuid": "01f50df9-3fdf-501b-b951-e6ceabfe7945",
00:25:04.550 "is_configured": true,
00:25:04.550 "data_offset": 2048,
00:25:04.550 "data_size": 63488
00:25:04.550 },
00:25:04.550 {
00:25:04.550 "name": "BaseBdev4",
00:25:04.550 "uuid": "69cd2247-774f-5589-a50e-d70b382b373c",
00:25:04.550 "is_configured": true,
00:25:04.550 "data_offset": 2048,
00:25:04.550 "data_size": 63488
00:25:04.550 }
00:25:04.550 ]
00:25:04.550 }'
00:25:04.550 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:04.550 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:25:04.807 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:25:04.807 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:25:04.807 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:25:04.807 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:25:04.807 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:25:04.807 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:04.807 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:04.807 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:25:04.807 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:04.807 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:04.807 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:25:04.807 "name": "raid_bdev1",
00:25:04.807 "uuid": "dd27c68f-87b8-4b80-ab33-fd0f92bf9730",
00:25:04.807 "strip_size_kb": 0,
00:25:04.807 "state": "online",
00:25:04.807 "raid_level": "raid1",
00:25:04.807 "superblock": true,
00:25:04.807 "num_base_bdevs": 4,
00:25:04.807 "num_base_bdevs_discovered": 3,
00:25:04.807 "num_base_bdevs_operational": 3,
00:25:04.807 "base_bdevs_list": [
00:25:04.807 {
00:25:04.807 "name": null,
00:25:04.807 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:04.807 "is_configured": false,
00:25:04.807 "data_offset": 0,
00:25:04.807 "data_size": 63488
00:25:04.807 },
00:25:04.807 {
00:25:04.807 "name": "BaseBdev2",
00:25:04.807 "uuid": "88fc9aaf-3102-53f1-a32a-44bf61030bea",
00:25:04.807 "is_configured": true,
00:25:04.807 "data_offset": 2048,
00:25:04.808 "data_size": 63488
00:25:04.808 },
00:25:04.808 {
00:25:04.808 "name": "BaseBdev3",
00:25:04.808 "uuid": "01f50df9-3fdf-501b-b951-e6ceabfe7945",
00:25:04.808 "is_configured": true,
00:25:04.808 "data_offset": 2048,
00:25:04.808 "data_size": 63488
00:25:04.808 },
00:25:04.808 {
00:25:04.808 "name": "BaseBdev4",
00:25:04.808 "uuid": "69cd2247-774f-5589-a50e-d70b382b373c",
00:25:04.808 "is_configured": true,
00:25:04.808 "data_offset": 2048,
00:25:04.808 "data_size": 63488
00:25:04.808 }
00:25:04.808 ]
00:25:04.808 }'
00:25:04.808 23:06:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:25:04.808 23:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:25:04.808 23:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:25:04.808 23:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:25:04.808 23:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:25:04.808 23:06:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:04.808 23:06:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:25:04.808 [2024-12-09 23:06:40.060605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
163.50 IOPS, 490.50 MiB/s [2024-12-09T23:06:40.171Z]
23:06:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:04.808 23:06:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1
00:25:04.808 [2024-12-09 23:06:40.110169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
[2024-12-09 23:06:40.111834] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:25:05.073 [2024-12-09 23:06:40.219144] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
[2024-12-09 23:06:40.219546] bdev_raid.c: 859:raid_bdev_submit_rw_request:
*DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:05.073 [2024-12-09 23:06:40.346603] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:05.073 [2024-12-09 23:06:40.346965] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:05.333 [2024-12-09 23:06:40.673547] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:05.590 [2024-12-09 23:06:40.800011] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:05.847 [2024-12-09 23:06:41.022277] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:25:05.847 [2024-12-09 23:06:41.023394] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:25:05.847 167.67 IOPS, 503.00 MiB/s [2024-12-09T23:06:41.210Z] 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:05.847 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:05.847 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:05.847 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:05.847 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:05.847 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:05.847 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:05.847 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.847 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:05.848 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.848 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:05.848 "name": "raid_bdev1", 00:25:05.848 "uuid": "dd27c68f-87b8-4b80-ab33-fd0f92bf9730", 00:25:05.848 "strip_size_kb": 0, 00:25:05.848 "state": "online", 00:25:05.848 "raid_level": "raid1", 00:25:05.848 "superblock": true, 00:25:05.848 "num_base_bdevs": 4, 00:25:05.848 "num_base_bdevs_discovered": 4, 00:25:05.848 "num_base_bdevs_operational": 4, 00:25:05.848 "process": { 00:25:05.848 "type": "rebuild", 00:25:05.848 "target": "spare", 00:25:05.848 "progress": { 00:25:05.848 "blocks": 14336, 00:25:05.848 "percent": 22 00:25:05.848 } 00:25:05.848 }, 00:25:05.848 "base_bdevs_list": [ 00:25:05.848 { 00:25:05.848 "name": "spare", 00:25:05.848 "uuid": "a6b8016b-0f9d-5b7e-a63c-3a6152f6bbeb", 00:25:05.848 "is_configured": true, 00:25:05.848 "data_offset": 2048, 00:25:05.848 "data_size": 63488 00:25:05.848 }, 00:25:05.848 { 00:25:05.848 "name": "BaseBdev2", 00:25:05.848 "uuid": "88fc9aaf-3102-53f1-a32a-44bf61030bea", 00:25:05.848 "is_configured": true, 00:25:05.848 "data_offset": 2048, 00:25:05.848 "data_size": 63488 00:25:05.848 }, 00:25:05.848 { 00:25:05.848 "name": "BaseBdev3", 00:25:05.848 "uuid": "01f50df9-3fdf-501b-b951-e6ceabfe7945", 00:25:05.848 "is_configured": true, 00:25:05.848 "data_offset": 2048, 00:25:05.848 "data_size": 63488 00:25:05.848 }, 00:25:05.848 { 00:25:05.848 "name": "BaseBdev4", 00:25:05.848 "uuid": "69cd2247-774f-5589-a50e-d70b382b373c", 00:25:05.848 "is_configured": true, 00:25:05.848 "data_offset": 2048, 00:25:05.848 "data_size": 63488 00:25:05.848 } 00:25:05.848 ] 00:25:05.848 }' 00:25:05.848 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type 
// "none"' 00:25:05.848 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:05.848 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:05.848 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:05.848 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:25:05.848 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:25:05.848 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:25:05.848 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:25:05.848 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:25:05.848 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:25:05.848 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:25:05.848 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.848 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:06.105 [2024-12-09 23:06:41.206427] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:06.105 [2024-12-09 23:06:41.245960] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:06.105 [2024-12-09 23:06:41.366448] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:25:06.105 [2024-12-09 23:06:41.366614] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:25:06.105 [2024-12-09 23:06:41.367651] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:06.105 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.105 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:25:06.105 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:25:06.105 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:06.105 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:06.105 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:06.105 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:06.105 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:06.105 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:06.105 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:06.105 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.105 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:06.105 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.105 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:06.105 "name": "raid_bdev1", 00:25:06.105 "uuid": "dd27c68f-87b8-4b80-ab33-fd0f92bf9730", 00:25:06.105 "strip_size_kb": 0, 00:25:06.105 "state": "online", 00:25:06.105 "raid_level": "raid1", 00:25:06.105 "superblock": true, 00:25:06.105 "num_base_bdevs": 4, 00:25:06.105 "num_base_bdevs_discovered": 3, 00:25:06.105 "num_base_bdevs_operational": 3, 
00:25:06.105 "process": { 00:25:06.105 "type": "rebuild", 00:25:06.105 "target": "spare", 00:25:06.105 "progress": { 00:25:06.105 "blocks": 16384, 00:25:06.105 "percent": 25 00:25:06.105 } 00:25:06.105 }, 00:25:06.105 "base_bdevs_list": [ 00:25:06.105 { 00:25:06.105 "name": "spare", 00:25:06.105 "uuid": "a6b8016b-0f9d-5b7e-a63c-3a6152f6bbeb", 00:25:06.105 "is_configured": true, 00:25:06.105 "data_offset": 2048, 00:25:06.105 "data_size": 63488 00:25:06.105 }, 00:25:06.105 { 00:25:06.105 "name": null, 00:25:06.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.105 "is_configured": false, 00:25:06.105 "data_offset": 0, 00:25:06.105 "data_size": 63488 00:25:06.105 }, 00:25:06.105 { 00:25:06.105 "name": "BaseBdev3", 00:25:06.105 "uuid": "01f50df9-3fdf-501b-b951-e6ceabfe7945", 00:25:06.105 "is_configured": true, 00:25:06.105 "data_offset": 2048, 00:25:06.105 "data_size": 63488 00:25:06.105 }, 00:25:06.105 { 00:25:06.105 "name": "BaseBdev4", 00:25:06.105 "uuid": "69cd2247-774f-5589-a50e-d70b382b373c", 00:25:06.105 "is_configured": true, 00:25:06.105 "data_offset": 2048, 00:25:06.105 "data_size": 63488 00:25:06.105 } 00:25:06.105 ] 00:25:06.105 }' 00:25:06.105 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:06.105 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:06.105 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:06.363 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:06.363 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=409 00:25:06.363 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:06.363 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:06.363 
23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:06.363 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:06.363 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:06.363 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:06.363 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:06.363 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:06.363 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.363 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:06.363 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.363 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:06.363 "name": "raid_bdev1", 00:25:06.363 "uuid": "dd27c68f-87b8-4b80-ab33-fd0f92bf9730", 00:25:06.363 "strip_size_kb": 0, 00:25:06.363 "state": "online", 00:25:06.363 "raid_level": "raid1", 00:25:06.363 "superblock": true, 00:25:06.363 "num_base_bdevs": 4, 00:25:06.363 "num_base_bdevs_discovered": 3, 00:25:06.363 "num_base_bdevs_operational": 3, 00:25:06.363 "process": { 00:25:06.363 "type": "rebuild", 00:25:06.363 "target": "spare", 00:25:06.363 "progress": { 00:25:06.363 "blocks": 16384, 00:25:06.363 "percent": 25 00:25:06.363 } 00:25:06.363 }, 00:25:06.363 "base_bdevs_list": [ 00:25:06.363 { 00:25:06.363 "name": "spare", 00:25:06.363 "uuid": "a6b8016b-0f9d-5b7e-a63c-3a6152f6bbeb", 00:25:06.363 "is_configured": true, 00:25:06.363 "data_offset": 2048, 00:25:06.363 "data_size": 63488 00:25:06.363 }, 00:25:06.363 { 00:25:06.363 "name": null, 00:25:06.363 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:25:06.363 "is_configured": false, 00:25:06.363 "data_offset": 0, 00:25:06.363 "data_size": 63488 00:25:06.363 }, 00:25:06.363 { 00:25:06.363 "name": "BaseBdev3", 00:25:06.363 "uuid": "01f50df9-3fdf-501b-b951-e6ceabfe7945", 00:25:06.363 "is_configured": true, 00:25:06.363 "data_offset": 2048, 00:25:06.363 "data_size": 63488 00:25:06.363 }, 00:25:06.363 { 00:25:06.363 "name": "BaseBdev4", 00:25:06.363 "uuid": "69cd2247-774f-5589-a50e-d70b382b373c", 00:25:06.363 "is_configured": true, 00:25:06.363 "data_offset": 2048, 00:25:06.363 "data_size": 63488 00:25:06.363 } 00:25:06.363 ] 00:25:06.363 }' 00:25:06.363 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:06.363 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:06.363 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:06.363 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:06.363 23:06:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:06.622 [2024-12-09 23:06:41.761073] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:25:06.879 [2024-12-09 23:06:41.982622] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:25:06.879 [2024-12-09 23:06:41.983044] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:25:07.219 143.00 IOPS, 429.00 MiB/s [2024-12-09T23:06:42.582Z] [2024-12-09 23:06:42.326146] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:25:07.219 [2024-12-09 23:06:42.438070] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:25:07.480 23:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:07.480 23:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:07.480 23:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:07.480 23:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:07.480 23:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:07.480 23:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:07.480 23:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:07.480 23:06:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.480 23:06:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:07.480 23:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:07.480 23:06:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.480 23:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:07.480 "name": "raid_bdev1", 00:25:07.480 "uuid": "dd27c68f-87b8-4b80-ab33-fd0f92bf9730", 00:25:07.480 "strip_size_kb": 0, 00:25:07.480 "state": "online", 00:25:07.480 "raid_level": "raid1", 00:25:07.480 "superblock": true, 00:25:07.480 "num_base_bdevs": 4, 00:25:07.480 "num_base_bdevs_discovered": 3, 00:25:07.480 "num_base_bdevs_operational": 3, 00:25:07.480 "process": { 00:25:07.480 "type": "rebuild", 00:25:07.480 "target": "spare", 00:25:07.480 "progress": { 00:25:07.480 "blocks": 34816, 00:25:07.480 "percent": 54 
00:25:07.480 } 00:25:07.480 }, 00:25:07.480 "base_bdevs_list": [ 00:25:07.480 { 00:25:07.480 "name": "spare", 00:25:07.480 "uuid": "a6b8016b-0f9d-5b7e-a63c-3a6152f6bbeb", 00:25:07.480 "is_configured": true, 00:25:07.480 "data_offset": 2048, 00:25:07.480 "data_size": 63488 00:25:07.480 }, 00:25:07.480 { 00:25:07.480 "name": null, 00:25:07.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.480 "is_configured": false, 00:25:07.480 "data_offset": 0, 00:25:07.480 "data_size": 63488 00:25:07.480 }, 00:25:07.480 { 00:25:07.480 "name": "BaseBdev3", 00:25:07.480 "uuid": "01f50df9-3fdf-501b-b951-e6ceabfe7945", 00:25:07.480 "is_configured": true, 00:25:07.480 "data_offset": 2048, 00:25:07.480 "data_size": 63488 00:25:07.480 }, 00:25:07.480 { 00:25:07.480 "name": "BaseBdev4", 00:25:07.480 "uuid": "69cd2247-774f-5589-a50e-d70b382b373c", 00:25:07.480 "is_configured": true, 00:25:07.480 "data_offset": 2048, 00:25:07.480 "data_size": 63488 00:25:07.480 } 00:25:07.480 ] 00:25:07.480 }' 00:25:07.480 23:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:07.480 23:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:07.480 23:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:07.480 23:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:07.480 23:06:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:07.480 [2024-12-09 23:06:42.764680] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:25:07.995 122.60 IOPS, 367.80 MiB/s [2024-12-09T23:06:43.358Z] [2024-12-09 23:06:43.312327] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:25:07.995 [2024-12-09 23:06:43.312709] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:25:08.561 23:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:08.561 23:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:08.561 23:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:08.561 23:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:08.561 23:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:08.561 23:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:08.561 23:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:08.561 23:06:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.561 23:06:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:08.561 23:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:08.561 23:06:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.561 [2024-12-09 23:06:43.754229] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:25:08.561 23:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:08.561 "name": "raid_bdev1", 00:25:08.561 "uuid": "dd27c68f-87b8-4b80-ab33-fd0f92bf9730", 00:25:08.561 "strip_size_kb": 0, 00:25:08.561 "state": "online", 00:25:08.561 "raid_level": "raid1", 00:25:08.561 "superblock": true, 00:25:08.561 "num_base_bdevs": 4, 00:25:08.561 "num_base_bdevs_discovered": 3, 00:25:08.561 "num_base_bdevs_operational": 3, 00:25:08.561 
"process": { 00:25:08.561 "type": "rebuild", 00:25:08.561 "target": "spare", 00:25:08.561 "progress": { 00:25:08.561 "blocks": 51200, 00:25:08.561 "percent": 80 00:25:08.561 } 00:25:08.561 }, 00:25:08.561 "base_bdevs_list": [ 00:25:08.561 { 00:25:08.561 "name": "spare", 00:25:08.561 "uuid": "a6b8016b-0f9d-5b7e-a63c-3a6152f6bbeb", 00:25:08.561 "is_configured": true, 00:25:08.561 "data_offset": 2048, 00:25:08.561 "data_size": 63488 00:25:08.561 }, 00:25:08.561 { 00:25:08.561 "name": null, 00:25:08.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:08.561 "is_configured": false, 00:25:08.561 "data_offset": 0, 00:25:08.561 "data_size": 63488 00:25:08.561 }, 00:25:08.561 { 00:25:08.561 "name": "BaseBdev3", 00:25:08.561 "uuid": "01f50df9-3fdf-501b-b951-e6ceabfe7945", 00:25:08.561 "is_configured": true, 00:25:08.561 "data_offset": 2048, 00:25:08.561 "data_size": 63488 00:25:08.561 }, 00:25:08.561 { 00:25:08.561 "name": "BaseBdev4", 00:25:08.561 "uuid": "69cd2247-774f-5589-a50e-d70b382b373c", 00:25:08.561 "is_configured": true, 00:25:08.561 "data_offset": 2048, 00:25:08.561 "data_size": 63488 00:25:08.561 } 00:25:08.561 ] 00:25:08.561 }' 00:25:08.561 23:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:08.561 23:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:08.561 23:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:08.561 23:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:08.561 23:06:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:08.822 109.33 IOPS, 328.00 MiB/s [2024-12-09T23:06:44.185Z] [2024-12-09 23:06:44.080437] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:25:09.389 [2024-12-09 23:06:44.509158] 
bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:09.389 [2024-12-09 23:06:44.609163] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:09.389 [2024-12-09 23:06:44.616582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:09.647 23:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:09.647 23:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:09.647 23:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:09.647 23:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:09.647 23:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:09.647 23:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:09.647 23:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:09.647 23:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:09.647 23:06:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.647 23:06:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:09.647 23:06:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.647 23:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:09.647 "name": "raid_bdev1", 00:25:09.647 "uuid": "dd27c68f-87b8-4b80-ab33-fd0f92bf9730", 00:25:09.647 "strip_size_kb": 0, 00:25:09.647 "state": "online", 00:25:09.647 "raid_level": "raid1", 00:25:09.647 "superblock": true, 00:25:09.647 "num_base_bdevs": 4, 00:25:09.647 
"num_base_bdevs_discovered": 3, 00:25:09.647 "num_base_bdevs_operational": 3, 00:25:09.647 "base_bdevs_list": [ 00:25:09.647 { 00:25:09.647 "name": "spare", 00:25:09.647 "uuid": "a6b8016b-0f9d-5b7e-a63c-3a6152f6bbeb", 00:25:09.647 "is_configured": true, 00:25:09.647 "data_offset": 2048, 00:25:09.647 "data_size": 63488 00:25:09.647 }, 00:25:09.647 { 00:25:09.647 "name": null, 00:25:09.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.647 "is_configured": false, 00:25:09.647 "data_offset": 0, 00:25:09.647 "data_size": 63488 00:25:09.647 }, 00:25:09.647 { 00:25:09.647 "name": "BaseBdev3", 00:25:09.647 "uuid": "01f50df9-3fdf-501b-b951-e6ceabfe7945", 00:25:09.647 "is_configured": true, 00:25:09.647 "data_offset": 2048, 00:25:09.647 "data_size": 63488 00:25:09.647 }, 00:25:09.647 { 00:25:09.647 "name": "BaseBdev4", 00:25:09.647 "uuid": "69cd2247-774f-5589-a50e-d70b382b373c", 00:25:09.647 "is_configured": true, 00:25:09.647 "data_offset": 2048, 00:25:09.647 "data_size": 63488 00:25:09.647 } 00:25:09.647 ] 00:25:09.647 }' 00:25:09.647 23:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:09.647 23:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:09.647 23:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:09.647 23:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:25:09.647 23:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:25:09.647 23:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:09.647 23:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:09.647 23:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:09.647 23:06:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:09.647 23:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:09.647 23:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:09.647 23:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:09.647 23:06:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.647 23:06:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:09.647 23:06:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.647 23:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:09.647 "name": "raid_bdev1", 00:25:09.647 "uuid": "dd27c68f-87b8-4b80-ab33-fd0f92bf9730", 00:25:09.647 "strip_size_kb": 0, 00:25:09.647 "state": "online", 00:25:09.647 "raid_level": "raid1", 00:25:09.647 "superblock": true, 00:25:09.647 "num_base_bdevs": 4, 00:25:09.647 "num_base_bdevs_discovered": 3, 00:25:09.647 "num_base_bdevs_operational": 3, 00:25:09.647 "base_bdevs_list": [ 00:25:09.647 { 00:25:09.647 "name": "spare", 00:25:09.647 "uuid": "a6b8016b-0f9d-5b7e-a63c-3a6152f6bbeb", 00:25:09.647 "is_configured": true, 00:25:09.647 "data_offset": 2048, 00:25:09.647 "data_size": 63488 00:25:09.647 }, 00:25:09.647 { 00:25:09.647 "name": null, 00:25:09.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.647 "is_configured": false, 00:25:09.647 "data_offset": 0, 00:25:09.647 "data_size": 63488 00:25:09.647 }, 00:25:09.647 { 00:25:09.647 "name": "BaseBdev3", 00:25:09.647 "uuid": "01f50df9-3fdf-501b-b951-e6ceabfe7945", 00:25:09.647 "is_configured": true, 00:25:09.647 "data_offset": 2048, 00:25:09.647 "data_size": 63488 00:25:09.647 }, 00:25:09.647 { 00:25:09.647 "name": "BaseBdev4", 00:25:09.647 "uuid": 
"69cd2247-774f-5589-a50e-d70b382b373c", 00:25:09.647 "is_configured": true, 00:25:09.647 "data_offset": 2048, 00:25:09.647 "data_size": 63488 00:25:09.647 } 00:25:09.647 ] 00:25:09.647 }' 00:25:09.647 23:06:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:09.647 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:09.647 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:09.917 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:09.917 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:09.917 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:09.917 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:09.917 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:09.917 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:09.917 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:09.917 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:09.917 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:09.917 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:09.917 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:09.917 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:09.917 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:25:09.917 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.917 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:09.917 98.86 IOPS, 296.57 MiB/s [2024-12-09T23:06:45.280Z] 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.917 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:09.917 "name": "raid_bdev1", 00:25:09.917 "uuid": "dd27c68f-87b8-4b80-ab33-fd0f92bf9730", 00:25:09.917 "strip_size_kb": 0, 00:25:09.917 "state": "online", 00:25:09.917 "raid_level": "raid1", 00:25:09.917 "superblock": true, 00:25:09.917 "num_base_bdevs": 4, 00:25:09.917 "num_base_bdevs_discovered": 3, 00:25:09.917 "num_base_bdevs_operational": 3, 00:25:09.917 "base_bdevs_list": [ 00:25:09.917 { 00:25:09.917 "name": "spare", 00:25:09.917 "uuid": "a6b8016b-0f9d-5b7e-a63c-3a6152f6bbeb", 00:25:09.917 "is_configured": true, 00:25:09.917 "data_offset": 2048, 00:25:09.917 "data_size": 63488 00:25:09.917 }, 00:25:09.917 { 00:25:09.917 "name": null, 00:25:09.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.917 "is_configured": false, 00:25:09.917 "data_offset": 0, 00:25:09.917 "data_size": 63488 00:25:09.917 }, 00:25:09.917 { 00:25:09.917 "name": "BaseBdev3", 00:25:09.918 "uuid": "01f50df9-3fdf-501b-b951-e6ceabfe7945", 00:25:09.918 "is_configured": true, 00:25:09.918 "data_offset": 2048, 00:25:09.918 "data_size": 63488 00:25:09.918 }, 00:25:09.918 { 00:25:09.918 "name": "BaseBdev4", 00:25:09.918 "uuid": "69cd2247-774f-5589-a50e-d70b382b373c", 00:25:09.918 "is_configured": true, 00:25:09.918 "data_offset": 2048, 00:25:09.918 "data_size": 63488 00:25:09.918 } 00:25:09.918 ] 00:25:09.918 }' 00:25:09.918 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:09.918 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:25:10.177 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:10.177 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.177 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:10.177 [2024-12-09 23:06:45.351374] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:10.177 [2024-12-09 23:06:45.351403] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:10.177 00:25:10.177 Latency(us) 00:25:10.177 [2024-12-09T23:06:45.540Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:10.177 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:25:10.177 raid_bdev1 : 7.31 96.26 288.79 0.00 0.00 14541.56 252.06 115343.36 00:25:10.177 [2024-12-09T23:06:45.540Z] =================================================================================================================== 00:25:10.177 [2024-12-09T23:06:45.540Z] Total : 96.26 288.79 0.00 0.00 14541.56 252.06 115343.36 00:25:10.177 [2024-12-09 23:06:45.383335] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:10.177 [2024-12-09 23:06:45.383396] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:10.177 [2024-12-09 23:06:45.383485] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:10.177 [2024-12-09 23:06:45.383495] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:10.177 { 00:25:10.177 "results": [ 00:25:10.177 { 00:25:10.177 "job": "raid_bdev1", 00:25:10.177 "core_mask": "0x1", 00:25:10.177 "workload": "randrw", 00:25:10.177 "percentage": 50, 00:25:10.177 "status": "finished", 00:25:10.177 "queue_depth": 
2, 00:25:10.177 "io_size": 3145728, 00:25:10.177 "runtime": 7.313163, 00:25:10.177 "iops": 96.26477626712273, 00:25:10.177 "mibps": 288.7943288013682, 00:25:10.177 "io_failed": 0, 00:25:10.177 "io_timeout": 0, 00:25:10.177 "avg_latency_us": 14541.561958041959, 00:25:10.177 "min_latency_us": 252.06153846153848, 00:25:10.177 "max_latency_us": 115343.36 00:25:10.177 } 00:25:10.177 ], 00:25:10.177 "core_count": 1 00:25:10.177 } 00:25:10.177 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.177 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:10.177 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.177 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:25:10.177 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:10.177 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.177 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:25:10.177 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:25:10.177 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:25:10.177 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:25:10.177 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:25:10.177 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:25:10.177 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:10.177 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:10.177 23:06:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:10.177 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:25:10.177 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:10.177 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:10.177 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:25:10.436 /dev/nbd0 00:25:10.436 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:10.436 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:10.436 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:10.436 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:25:10.436 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:10.436 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:10.436 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:10.436 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:25:10.436 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:10.436 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:10.436 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:10.436 1+0 records in 00:25:10.436 1+0 records out 00:25:10.436 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196167 s, 20.9 MB/s 00:25:10.436 23:06:45 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:10.436 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:25:10.436 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:10.436 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:10.436 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:25:10.436 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:10.436 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:10.436 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:25:10.436 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:25:10.436 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:25:10.436 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:25:10.436 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:25:10.436 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:25:10.436 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:25:10.436 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:25:10.436 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:10.436 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:25:10.436 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- 
# local nbd_list 00:25:10.436 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:25:10.436 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:10.436 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:10.436 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:25:10.694 /dev/nbd1 00:25:10.694 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:10.694 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:10.694 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:25:10.694 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:25:10.694 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:10.694 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:10.694 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:25:10.694 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:25:10.694 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:10.694 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:10.694 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:10.694 1+0 records in 00:25:10.694 1+0 records out 00:25:10.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256833 s, 15.9 MB/s 00:25:10.694 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:10.694 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:25:10.694 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:10.694 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:10.694 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:25:10.694 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:10.694 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:10.694 23:06:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:10.694 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:25:10.694 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:25:10.694 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:25:10.694 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:10.694 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:25:10.694 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:10.694 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:25:10.953 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:10.953 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:10.953 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:10.953 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:10.953 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:10.953 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:10.953 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:25:10.953 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:25:10.953 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:25:10.953 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:25:10.953 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:25:10.953 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:25:10.953 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:25:10.953 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:10.953 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:25:10.953 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:10.953 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:25:10.953 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:10.953 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:10.953 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:25:11.210 /dev/nbd1 00:25:11.210 23:06:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:11.210 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:11.210 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:25:11.210 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:25:11.210 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:11.210 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:11.210 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:25:11.210 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:25:11.210 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:11.210 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:11.210 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:11.210 1+0 records in 00:25:11.210 1+0 records out 00:25:11.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300889 s, 13.6 MB/s 00:25:11.210 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:11.210 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:25:11.210 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:11.210 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:11.210 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 
00:25:11.210 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:11.210 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:11.210 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:11.210 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:25:11.210 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:25:11.210 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:25:11.210 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:11.210 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:25:11.210 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:11.210 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:25:11.467 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:11.467 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:11.467 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:11.467 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:11.467 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:11.467 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:11.467 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:25:11.467 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:25:11.467 
23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:25:11.467 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:25:11.467 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:11.467 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:11.467 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:25:11.467 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:11.467 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:25:11.726 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:11.726 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:11.726 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:11.726 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:11.726 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:11.726 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:11.726 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:25:11.726 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:25:11.726 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:25:11.726 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:25:11.726 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:11.726 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:11.726 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.726 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:11.726 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.726 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:11.726 [2024-12-09 23:06:46.946860] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:11.726 [2024-12-09 23:06:46.946917] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:11.726 [2024-12-09 23:06:46.946933] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:25:11.726 [2024-12-09 23:06:46.946943] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:11.726 [2024-12-09 23:06:46.948836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:11.726 [2024-12-09 23:06:46.948871] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:11.726 [2024-12-09 23:06:46.948951] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:11.726 [2024-12-09 23:06:46.948992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:11.726 [2024-12-09 23:06:46.949109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:11.726 [2024-12-09 23:06:46.949190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:11.726 spare 00:25:11.726 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.726 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # 
rpc_cmd bdev_wait_for_examine 00:25:11.726 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.726 23:06:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:11.726 [2024-12-09 23:06:47.049266] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:25:11.726 [2024-12-09 23:06:47.049312] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:11.726 [2024-12-09 23:06:47.049593] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:25:11.726 [2024-12-09 23:06:47.049746] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:25:11.726 [2024-12-09 23:06:47.049753] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:25:11.726 [2024-12-09 23:06:47.049898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:11.726 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.726 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:11.726 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:11.726 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:11.726 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:11.726 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:11.726 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:11.726 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:11.726 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:11.726 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:11.726 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:11.726 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:11.726 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:11.726 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.726 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:11.726 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.984 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:11.984 "name": "raid_bdev1", 00:25:11.984 "uuid": "dd27c68f-87b8-4b80-ab33-fd0f92bf9730", 00:25:11.984 "strip_size_kb": 0, 00:25:11.984 "state": "online", 00:25:11.984 "raid_level": "raid1", 00:25:11.984 "superblock": true, 00:25:11.984 "num_base_bdevs": 4, 00:25:11.984 "num_base_bdevs_discovered": 3, 00:25:11.984 "num_base_bdevs_operational": 3, 00:25:11.984 "base_bdevs_list": [ 00:25:11.984 { 00:25:11.984 "name": "spare", 00:25:11.984 "uuid": "a6b8016b-0f9d-5b7e-a63c-3a6152f6bbeb", 00:25:11.984 "is_configured": true, 00:25:11.984 "data_offset": 2048, 00:25:11.984 "data_size": 63488 00:25:11.984 }, 00:25:11.984 { 00:25:11.984 "name": null, 00:25:11.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:11.984 "is_configured": false, 00:25:11.984 "data_offset": 2048, 00:25:11.984 "data_size": 63488 00:25:11.984 }, 00:25:11.984 { 00:25:11.984 "name": "BaseBdev3", 00:25:11.984 "uuid": "01f50df9-3fdf-501b-b951-e6ceabfe7945", 00:25:11.984 "is_configured": true, 00:25:11.984 "data_offset": 2048, 00:25:11.984 "data_size": 63488 00:25:11.984 }, 
00:25:11.984 { 00:25:11.984 "name": "BaseBdev4", 00:25:11.984 "uuid": "69cd2247-774f-5589-a50e-d70b382b373c", 00:25:11.984 "is_configured": true, 00:25:11.984 "data_offset": 2048, 00:25:11.984 "data_size": 63488 00:25:11.984 } 00:25:11.984 ] 00:25:11.984 }' 00:25:11.984 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:11.984 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:12.242 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:12.242 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:12.242 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:12.242 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:12.242 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:12.242 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:12.242 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:12.242 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.242 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:12.242 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.242 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:12.242 "name": "raid_bdev1", 00:25:12.242 "uuid": "dd27c68f-87b8-4b80-ab33-fd0f92bf9730", 00:25:12.242 "strip_size_kb": 0, 00:25:12.242 "state": "online", 00:25:12.242 "raid_level": "raid1", 00:25:12.242 "superblock": true, 00:25:12.242 "num_base_bdevs": 4, 00:25:12.242 
"num_base_bdevs_discovered": 3, 00:25:12.242 "num_base_bdevs_operational": 3, 00:25:12.242 "base_bdevs_list": [ 00:25:12.242 { 00:25:12.242 "name": "spare", 00:25:12.242 "uuid": "a6b8016b-0f9d-5b7e-a63c-3a6152f6bbeb", 00:25:12.242 "is_configured": true, 00:25:12.242 "data_offset": 2048, 00:25:12.242 "data_size": 63488 00:25:12.242 }, 00:25:12.242 { 00:25:12.242 "name": null, 00:25:12.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:12.242 "is_configured": false, 00:25:12.242 "data_offset": 2048, 00:25:12.242 "data_size": 63488 00:25:12.242 }, 00:25:12.242 { 00:25:12.242 "name": "BaseBdev3", 00:25:12.242 "uuid": "01f50df9-3fdf-501b-b951-e6ceabfe7945", 00:25:12.242 "is_configured": true, 00:25:12.242 "data_offset": 2048, 00:25:12.242 "data_size": 63488 00:25:12.242 }, 00:25:12.242 { 00:25:12.242 "name": "BaseBdev4", 00:25:12.242 "uuid": "69cd2247-774f-5589-a50e-d70b382b373c", 00:25:12.242 "is_configured": true, 00:25:12.242 "data_offset": 2048, 00:25:12.242 "data_size": 63488 00:25:12.242 } 00:25:12.242 ] 00:25:12.242 }' 00:25:12.242 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:12.242 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:12.242 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:12.242 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:12.242 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:12.242 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:12.242 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.242 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:12.242 23:06:47 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.242 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:25:12.242 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:25:12.242 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.242 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:12.242 [2024-12-09 23:06:47.495074] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:12.242 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.242 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:12.242 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:12.242 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:12.242 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:12.242 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:12.242 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:12.242 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:12.242 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:12.243 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:12.243 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:12.243 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:25:12.243 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:12.243 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.243 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:12.243 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.243 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:12.243 "name": "raid_bdev1", 00:25:12.243 "uuid": "dd27c68f-87b8-4b80-ab33-fd0f92bf9730", 00:25:12.243 "strip_size_kb": 0, 00:25:12.243 "state": "online", 00:25:12.243 "raid_level": "raid1", 00:25:12.243 "superblock": true, 00:25:12.243 "num_base_bdevs": 4, 00:25:12.243 "num_base_bdevs_discovered": 2, 00:25:12.243 "num_base_bdevs_operational": 2, 00:25:12.243 "base_bdevs_list": [ 00:25:12.243 { 00:25:12.243 "name": null, 00:25:12.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:12.243 "is_configured": false, 00:25:12.243 "data_offset": 0, 00:25:12.243 "data_size": 63488 00:25:12.243 }, 00:25:12.243 { 00:25:12.243 "name": null, 00:25:12.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:12.243 "is_configured": false, 00:25:12.243 "data_offset": 2048, 00:25:12.243 "data_size": 63488 00:25:12.243 }, 00:25:12.243 { 00:25:12.243 "name": "BaseBdev3", 00:25:12.243 "uuid": "01f50df9-3fdf-501b-b951-e6ceabfe7945", 00:25:12.243 "is_configured": true, 00:25:12.243 "data_offset": 2048, 00:25:12.243 "data_size": 63488 00:25:12.243 }, 00:25:12.243 { 00:25:12.243 "name": "BaseBdev4", 00:25:12.243 "uuid": "69cd2247-774f-5589-a50e-d70b382b373c", 00:25:12.243 "is_configured": true, 00:25:12.243 "data_offset": 2048, 00:25:12.243 "data_size": 63488 00:25:12.243 } 00:25:12.243 ] 00:25:12.243 }' 00:25:12.243 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:12.243 23:06:47 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:12.520 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:12.520 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.520 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:12.520 [2024-12-09 23:06:47.815190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:12.520 [2024-12-09 23:06:47.815335] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:25:12.520 [2024-12-09 23:06:47.815349] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:25:12.520 [2024-12-09 23:06:47.815380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:12.520 [2024-12-09 23:06:47.823330] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:25:12.520 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.520 23:06:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:25:12.520 [2024-12-09 23:06:47.824914] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:13.895 23:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:13.895 23:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:13.895 23:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:13.895 23:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:13.895 23:06:48 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:13.895 23:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:13.895 23:06:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.895 23:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.895 23:06:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:13.895 23:06:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.895 23:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:13.895 "name": "raid_bdev1", 00:25:13.895 "uuid": "dd27c68f-87b8-4b80-ab33-fd0f92bf9730", 00:25:13.895 "strip_size_kb": 0, 00:25:13.895 "state": "online", 00:25:13.895 "raid_level": "raid1", 00:25:13.895 "superblock": true, 00:25:13.895 "num_base_bdevs": 4, 00:25:13.895 "num_base_bdevs_discovered": 3, 00:25:13.895 "num_base_bdevs_operational": 3, 00:25:13.895 "process": { 00:25:13.895 "type": "rebuild", 00:25:13.895 "target": "spare", 00:25:13.895 "progress": { 00:25:13.895 "blocks": 20480, 00:25:13.895 "percent": 32 00:25:13.895 } 00:25:13.895 }, 00:25:13.895 "base_bdevs_list": [ 00:25:13.895 { 00:25:13.895 "name": "spare", 00:25:13.895 "uuid": "a6b8016b-0f9d-5b7e-a63c-3a6152f6bbeb", 00:25:13.895 "is_configured": true, 00:25:13.895 "data_offset": 2048, 00:25:13.895 "data_size": 63488 00:25:13.895 }, 00:25:13.895 { 00:25:13.895 "name": null, 00:25:13.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.895 "is_configured": false, 00:25:13.895 "data_offset": 2048, 00:25:13.895 "data_size": 63488 00:25:13.895 }, 00:25:13.895 { 00:25:13.895 "name": "BaseBdev3", 00:25:13.895 "uuid": "01f50df9-3fdf-501b-b951-e6ceabfe7945", 00:25:13.895 "is_configured": true, 00:25:13.895 "data_offset": 2048, 00:25:13.895 "data_size": 63488 00:25:13.895 }, 00:25:13.895 { 
00:25:13.895 "name": "BaseBdev4", 00:25:13.895 "uuid": "69cd2247-774f-5589-a50e-d70b382b373c", 00:25:13.895 "is_configured": true, 00:25:13.895 "data_offset": 2048, 00:25:13.895 "data_size": 63488 00:25:13.895 } 00:25:13.895 ] 00:25:13.895 }' 00:25:13.895 23:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:13.895 23:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:13.895 23:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:13.895 23:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:13.895 23:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:25:13.895 23:06:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.895 23:06:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:13.895 [2024-12-09 23:06:48.931342] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:13.895 [2024-12-09 23:06:49.030405] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:13.895 [2024-12-09 23:06:49.030478] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:13.895 [2024-12-09 23:06:49.030491] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:13.895 [2024-12-09 23:06:49.030499] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:13.895 23:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.895 23:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:13.895 23:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:25:13.895 23:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:13.895 23:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:13.895 23:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:13.895 23:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:13.895 23:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:13.895 23:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:13.895 23:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:13.895 23:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:13.895 23:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:13.895 23:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.895 23:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.895 23:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:13.895 23:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.895 23:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:13.895 "name": "raid_bdev1", 00:25:13.895 "uuid": "dd27c68f-87b8-4b80-ab33-fd0f92bf9730", 00:25:13.895 "strip_size_kb": 0, 00:25:13.895 "state": "online", 00:25:13.895 "raid_level": "raid1", 00:25:13.895 "superblock": true, 00:25:13.895 "num_base_bdevs": 4, 00:25:13.895 "num_base_bdevs_discovered": 2, 00:25:13.895 "num_base_bdevs_operational": 2, 00:25:13.895 "base_bdevs_list": [ 00:25:13.895 { 00:25:13.895 
"name": null, 00:25:13.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.895 "is_configured": false, 00:25:13.895 "data_offset": 0, 00:25:13.895 "data_size": 63488 00:25:13.895 }, 00:25:13.895 { 00:25:13.895 "name": null, 00:25:13.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.895 "is_configured": false, 00:25:13.895 "data_offset": 2048, 00:25:13.895 "data_size": 63488 00:25:13.895 }, 00:25:13.895 { 00:25:13.895 "name": "BaseBdev3", 00:25:13.895 "uuid": "01f50df9-3fdf-501b-b951-e6ceabfe7945", 00:25:13.895 "is_configured": true, 00:25:13.895 "data_offset": 2048, 00:25:13.895 "data_size": 63488 00:25:13.895 }, 00:25:13.895 { 00:25:13.895 "name": "BaseBdev4", 00:25:13.895 "uuid": "69cd2247-774f-5589-a50e-d70b382b373c", 00:25:13.895 "is_configured": true, 00:25:13.895 "data_offset": 2048, 00:25:13.895 "data_size": 63488 00:25:13.895 } 00:25:13.895 ] 00:25:13.895 }' 00:25:13.895 23:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:13.896 23:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:14.156 23:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:14.156 23:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.156 23:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:14.156 [2024-12-09 23:06:49.351877] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:14.156 [2024-12-09 23:06:49.351931] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:14.156 [2024-12-09 23:06:49.351955] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:25:14.156 [2024-12-09 23:06:49.351964] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:14.156 [2024-12-09 23:06:49.352355] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:14.156 [2024-12-09 23:06:49.352373] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:14.156 [2024-12-09 23:06:49.352457] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:14.156 [2024-12-09 23:06:49.352469] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:25:14.156 [2024-12-09 23:06:49.352477] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:25:14.156 [2024-12-09 23:06:49.352500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:14.156 [2024-12-09 23:06:49.360291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:25:14.156 spare 00:25:14.156 23:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.156 23:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:25:14.156 [2024-12-09 23:06:49.361875] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:15.090 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:15.090 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:15.090 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:15.090 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:15.090 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:15.090 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:15.090 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.090 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:15.090 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:15.090 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.090 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:15.090 "name": "raid_bdev1", 00:25:15.090 "uuid": "dd27c68f-87b8-4b80-ab33-fd0f92bf9730", 00:25:15.090 "strip_size_kb": 0, 00:25:15.090 "state": "online", 00:25:15.090 "raid_level": "raid1", 00:25:15.090 "superblock": true, 00:25:15.090 "num_base_bdevs": 4, 00:25:15.090 "num_base_bdevs_discovered": 3, 00:25:15.090 "num_base_bdevs_operational": 3, 00:25:15.090 "process": { 00:25:15.090 "type": "rebuild", 00:25:15.090 "target": "spare", 00:25:15.090 "progress": { 00:25:15.090 "blocks": 20480, 00:25:15.090 "percent": 32 00:25:15.090 } 00:25:15.090 }, 00:25:15.090 "base_bdevs_list": [ 00:25:15.090 { 00:25:15.090 "name": "spare", 00:25:15.090 "uuid": "a6b8016b-0f9d-5b7e-a63c-3a6152f6bbeb", 00:25:15.090 "is_configured": true, 00:25:15.090 "data_offset": 2048, 00:25:15.090 "data_size": 63488 00:25:15.090 }, 00:25:15.090 { 00:25:15.090 "name": null, 00:25:15.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:15.090 "is_configured": false, 00:25:15.090 "data_offset": 2048, 00:25:15.090 "data_size": 63488 00:25:15.090 }, 00:25:15.090 { 00:25:15.090 "name": "BaseBdev3", 00:25:15.090 "uuid": "01f50df9-3fdf-501b-b951-e6ceabfe7945", 00:25:15.090 "is_configured": true, 00:25:15.090 "data_offset": 2048, 00:25:15.090 "data_size": 63488 00:25:15.090 }, 00:25:15.090 { 00:25:15.090 "name": "BaseBdev4", 00:25:15.090 "uuid": "69cd2247-774f-5589-a50e-d70b382b373c", 00:25:15.091 "is_configured": true, 00:25:15.091 "data_offset": 2048, 00:25:15.091 "data_size": 63488 00:25:15.091 } 00:25:15.091 
] 00:25:15.091 }' 00:25:15.091 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:15.091 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:15.091 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:15.351 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:15.351 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:25:15.351 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.351 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:15.351 [2024-12-09 23:06:50.468298] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:15.351 [2024-12-09 23:06:50.567386] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:15.351 [2024-12-09 23:06:50.567449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:15.351 [2024-12-09 23:06:50.567464] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:15.351 [2024-12-09 23:06:50.567470] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:15.351 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.351 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:15.351 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:15.351 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:15.351 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:15.351 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:15.351 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:15.351 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:15.351 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:15.351 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:15.351 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:15.351 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:15.351 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.351 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:15.351 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:15.352 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.352 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:15.352 "name": "raid_bdev1", 00:25:15.352 "uuid": "dd27c68f-87b8-4b80-ab33-fd0f92bf9730", 00:25:15.352 "strip_size_kb": 0, 00:25:15.352 "state": "online", 00:25:15.352 "raid_level": "raid1", 00:25:15.352 "superblock": true, 00:25:15.352 "num_base_bdevs": 4, 00:25:15.352 "num_base_bdevs_discovered": 2, 00:25:15.352 "num_base_bdevs_operational": 2, 00:25:15.352 "base_bdevs_list": [ 00:25:15.352 { 00:25:15.352 "name": null, 00:25:15.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:15.352 "is_configured": false, 00:25:15.352 "data_offset": 0, 00:25:15.352 "data_size": 63488 00:25:15.352 }, 00:25:15.352 { 
00:25:15.352 "name": null, 00:25:15.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:15.352 "is_configured": false, 00:25:15.352 "data_offset": 2048, 00:25:15.352 "data_size": 63488 00:25:15.352 }, 00:25:15.352 { 00:25:15.352 "name": "BaseBdev3", 00:25:15.352 "uuid": "01f50df9-3fdf-501b-b951-e6ceabfe7945", 00:25:15.352 "is_configured": true, 00:25:15.352 "data_offset": 2048, 00:25:15.352 "data_size": 63488 00:25:15.352 }, 00:25:15.352 { 00:25:15.352 "name": "BaseBdev4", 00:25:15.352 "uuid": "69cd2247-774f-5589-a50e-d70b382b373c", 00:25:15.352 "is_configured": true, 00:25:15.352 "data_offset": 2048, 00:25:15.352 "data_size": 63488 00:25:15.352 } 00:25:15.352 ] 00:25:15.352 }' 00:25:15.352 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:15.352 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:15.615 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:15.615 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:15.615 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:15.615 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:15.615 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:15.615 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:15.615 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:15.615 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.615 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:15.615 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.615 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:15.615 "name": "raid_bdev1", 00:25:15.615 "uuid": "dd27c68f-87b8-4b80-ab33-fd0f92bf9730", 00:25:15.615 "strip_size_kb": 0, 00:25:15.615 "state": "online", 00:25:15.615 "raid_level": "raid1", 00:25:15.615 "superblock": true, 00:25:15.615 "num_base_bdevs": 4, 00:25:15.615 "num_base_bdevs_discovered": 2, 00:25:15.615 "num_base_bdevs_operational": 2, 00:25:15.615 "base_bdevs_list": [ 00:25:15.615 { 00:25:15.615 "name": null, 00:25:15.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:15.615 "is_configured": false, 00:25:15.615 "data_offset": 0, 00:25:15.615 "data_size": 63488 00:25:15.615 }, 00:25:15.615 { 00:25:15.615 "name": null, 00:25:15.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:15.615 "is_configured": false, 00:25:15.615 "data_offset": 2048, 00:25:15.615 "data_size": 63488 00:25:15.615 }, 00:25:15.615 { 00:25:15.615 "name": "BaseBdev3", 00:25:15.615 "uuid": "01f50df9-3fdf-501b-b951-e6ceabfe7945", 00:25:15.615 "is_configured": true, 00:25:15.615 "data_offset": 2048, 00:25:15.616 "data_size": 63488 00:25:15.616 }, 00:25:15.616 { 00:25:15.616 "name": "BaseBdev4", 00:25:15.616 "uuid": "69cd2247-774f-5589-a50e-d70b382b373c", 00:25:15.616 "is_configured": true, 00:25:15.616 "data_offset": 2048, 00:25:15.616 "data_size": 63488 00:25:15.616 } 00:25:15.616 ] 00:25:15.616 }' 00:25:15.616 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:15.616 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:15.616 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:15.878 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:15.878 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:25:15.878 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.878 23:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:15.878 23:06:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.878 23:06:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:15.878 23:06:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.878 23:06:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:15.878 [2024-12-09 23:06:51.012590] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:15.878 [2024-12-09 23:06:51.012635] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:15.878 [2024-12-09 23:06:51.012652] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:25:15.878 [2024-12-09 23:06:51.012659] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:15.878 [2024-12-09 23:06:51.013015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:15.878 [2024-12-09 23:06:51.013025] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:15.878 [2024-12-09 23:06:51.013088] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:15.878 [2024-12-09 23:06:51.013111] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:25:15.878 [2024-12-09 23:06:51.013119] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:15.878 [2024-12-09 23:06:51.013129] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:25:15.878 BaseBdev1 00:25:15.878 23:06:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.878 23:06:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:25:16.821 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:16.821 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:16.821 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:16.821 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:16.821 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:16.821 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:16.821 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:16.821 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:16.821 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:16.821 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:16.821 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:16.821 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:16.821 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.821 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:16.821 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.821 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:16.821 "name": "raid_bdev1", 00:25:16.821 "uuid": "dd27c68f-87b8-4b80-ab33-fd0f92bf9730", 00:25:16.821 "strip_size_kb": 0, 00:25:16.821 "state": "online", 00:25:16.821 "raid_level": "raid1", 00:25:16.821 "superblock": true, 00:25:16.822 "num_base_bdevs": 4, 00:25:16.822 "num_base_bdevs_discovered": 2, 00:25:16.822 "num_base_bdevs_operational": 2, 00:25:16.822 "base_bdevs_list": [ 00:25:16.822 { 00:25:16.822 "name": null, 00:25:16.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.822 "is_configured": false, 00:25:16.822 "data_offset": 0, 00:25:16.822 "data_size": 63488 00:25:16.822 }, 00:25:16.822 { 00:25:16.822 "name": null, 00:25:16.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.822 "is_configured": false, 00:25:16.822 "data_offset": 2048, 00:25:16.822 "data_size": 63488 00:25:16.822 }, 00:25:16.822 { 00:25:16.822 "name": "BaseBdev3", 00:25:16.822 "uuid": "01f50df9-3fdf-501b-b951-e6ceabfe7945", 00:25:16.822 "is_configured": true, 00:25:16.822 "data_offset": 2048, 00:25:16.822 "data_size": 63488 00:25:16.822 }, 00:25:16.822 { 00:25:16.822 "name": "BaseBdev4", 00:25:16.822 "uuid": "69cd2247-774f-5589-a50e-d70b382b373c", 00:25:16.822 "is_configured": true, 00:25:16.822 "data_offset": 2048, 00:25:16.822 "data_size": 63488 00:25:16.822 } 00:25:16.822 ] 00:25:16.822 }' 00:25:16.822 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:16.822 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:17.083 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:17.083 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:17.083 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 
-- # local process_type=none 00:25:17.083 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:17.083 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:17.083 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:17.083 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:17.083 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.083 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:17.083 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.083 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:17.083 "name": "raid_bdev1", 00:25:17.083 "uuid": "dd27c68f-87b8-4b80-ab33-fd0f92bf9730", 00:25:17.083 "strip_size_kb": 0, 00:25:17.083 "state": "online", 00:25:17.083 "raid_level": "raid1", 00:25:17.083 "superblock": true, 00:25:17.083 "num_base_bdevs": 4, 00:25:17.083 "num_base_bdevs_discovered": 2, 00:25:17.083 "num_base_bdevs_operational": 2, 00:25:17.083 "base_bdevs_list": [ 00:25:17.083 { 00:25:17.083 "name": null, 00:25:17.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:17.083 "is_configured": false, 00:25:17.083 "data_offset": 0, 00:25:17.083 "data_size": 63488 00:25:17.083 }, 00:25:17.083 { 00:25:17.083 "name": null, 00:25:17.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:17.083 "is_configured": false, 00:25:17.083 "data_offset": 2048, 00:25:17.083 "data_size": 63488 00:25:17.083 }, 00:25:17.083 { 00:25:17.083 "name": "BaseBdev3", 00:25:17.083 "uuid": "01f50df9-3fdf-501b-b951-e6ceabfe7945", 00:25:17.083 "is_configured": true, 00:25:17.083 "data_offset": 2048, 00:25:17.083 "data_size": 63488 00:25:17.083 }, 00:25:17.083 { 00:25:17.083 
"name": "BaseBdev4", 00:25:17.083 "uuid": "69cd2247-774f-5589-a50e-d70b382b373c", 00:25:17.083 "is_configured": true, 00:25:17.083 "data_offset": 2048, 00:25:17.083 "data_size": 63488 00:25:17.083 } 00:25:17.083 ] 00:25:17.083 }' 00:25:17.083 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:17.083 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:17.083 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:17.083 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:17.083 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:17.083 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:25:17.083 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:17.083 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:17.083 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:17.083 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:17.083 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:17.083 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:17.083 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.083 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:17.083 [2024-12-09 23:06:52.425033] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:17.083 [2024-12-09 23:06:52.425162] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:25:17.083 [2024-12-09 23:06:52.425175] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:17.083 request: 00:25:17.083 { 00:25:17.083 "base_bdev": "BaseBdev1", 00:25:17.083 "raid_bdev": "raid_bdev1", 00:25:17.083 "method": "bdev_raid_add_base_bdev", 00:25:17.083 "req_id": 1 00:25:17.083 } 00:25:17.083 Got JSON-RPC error response 00:25:17.083 response: 00:25:17.083 { 00:25:17.083 "code": -22, 00:25:17.083 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:25:17.083 } 00:25:17.083 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:17.083 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:25:17.083 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:17.083 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:17.083 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:17.083 23:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:25:18.466 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:18.466 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:18.466 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:18.466 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:18.466 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 
-- # local strip_size=0 00:25:18.466 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:18.466 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:18.466 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:18.466 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:18.466 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:18.466 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:18.466 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:18.466 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.466 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:18.466 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.466 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:18.466 "name": "raid_bdev1", 00:25:18.466 "uuid": "dd27c68f-87b8-4b80-ab33-fd0f92bf9730", 00:25:18.466 "strip_size_kb": 0, 00:25:18.466 "state": "online", 00:25:18.466 "raid_level": "raid1", 00:25:18.466 "superblock": true, 00:25:18.466 "num_base_bdevs": 4, 00:25:18.466 "num_base_bdevs_discovered": 2, 00:25:18.466 "num_base_bdevs_operational": 2, 00:25:18.466 "base_bdevs_list": [ 00:25:18.466 { 00:25:18.466 "name": null, 00:25:18.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.467 "is_configured": false, 00:25:18.467 "data_offset": 0, 00:25:18.467 "data_size": 63488 00:25:18.467 }, 00:25:18.467 { 00:25:18.467 "name": null, 00:25:18.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.467 "is_configured": false, 
00:25:18.467 "data_offset": 2048, 00:25:18.467 "data_size": 63488 00:25:18.467 }, 00:25:18.467 { 00:25:18.467 "name": "BaseBdev3", 00:25:18.467 "uuid": "01f50df9-3fdf-501b-b951-e6ceabfe7945", 00:25:18.467 "is_configured": true, 00:25:18.467 "data_offset": 2048, 00:25:18.467 "data_size": 63488 00:25:18.467 }, 00:25:18.467 { 00:25:18.467 "name": "BaseBdev4", 00:25:18.467 "uuid": "69cd2247-774f-5589-a50e-d70b382b373c", 00:25:18.467 "is_configured": true, 00:25:18.467 "data_offset": 2048, 00:25:18.467 "data_size": 63488 00:25:18.467 } 00:25:18.467 ] 00:25:18.467 }' 00:25:18.467 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:18.467 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:18.467 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:18.467 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:18.467 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:18.467 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:18.467 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:18.467 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:18.467 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:18.467 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.467 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:18.467 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.467 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:25:18.467 "name": "raid_bdev1", 00:25:18.467 "uuid": "dd27c68f-87b8-4b80-ab33-fd0f92bf9730", 00:25:18.467 "strip_size_kb": 0, 00:25:18.467 "state": "online", 00:25:18.467 "raid_level": "raid1", 00:25:18.467 "superblock": true, 00:25:18.467 "num_base_bdevs": 4, 00:25:18.467 "num_base_bdevs_discovered": 2, 00:25:18.467 "num_base_bdevs_operational": 2, 00:25:18.467 "base_bdevs_list": [ 00:25:18.467 { 00:25:18.467 "name": null, 00:25:18.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.467 "is_configured": false, 00:25:18.467 "data_offset": 0, 00:25:18.467 "data_size": 63488 00:25:18.467 }, 00:25:18.467 { 00:25:18.467 "name": null, 00:25:18.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.467 "is_configured": false, 00:25:18.467 "data_offset": 2048, 00:25:18.467 "data_size": 63488 00:25:18.467 }, 00:25:18.467 { 00:25:18.467 "name": "BaseBdev3", 00:25:18.467 "uuid": "01f50df9-3fdf-501b-b951-e6ceabfe7945", 00:25:18.467 "is_configured": true, 00:25:18.467 "data_offset": 2048, 00:25:18.467 "data_size": 63488 00:25:18.467 }, 00:25:18.467 { 00:25:18.467 "name": "BaseBdev4", 00:25:18.467 "uuid": "69cd2247-774f-5589-a50e-d70b382b373c", 00:25:18.467 "is_configured": true, 00:25:18.467 "data_offset": 2048, 00:25:18.467 "data_size": 63488 00:25:18.467 } 00:25:18.467 ] 00:25:18.467 }' 00:25:18.467 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:18.467 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:18.467 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:18.726 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:18.726 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77027 00:25:18.726 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 
77027 ']' 00:25:18.726 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77027 00:25:18.726 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:25:18.726 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:18.726 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77027 00:25:18.726 killing process with pid 77027 00:25:18.726 Received shutdown signal, test time was about 15.808918 seconds 00:25:18.726 00:25:18.726 Latency(us) 00:25:18.726 [2024-12-09T23:06:54.089Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.726 [2024-12-09T23:06:54.089Z] =================================================================================================================== 00:25:18.726 [2024-12-09T23:06:54.089Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:18.726 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:18.726 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:18.726 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77027' 00:25:18.726 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77027 00:25:18.726 [2024-12-09 23:06:53.867091] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:18.726 [2024-12-09 23:06:53.867200] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:18.726 23:06:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77027 00:25:18.726 [2024-12-09 23:06:53.867252] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:18.726 [2024-12-09 23:06:53.867262] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:25:18.727 [2024-12-09 23:06:54.073570] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:19.671 23:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:25:19.671 00:25:19.671 real 0m18.203s 00:25:19.671 user 0m23.277s 00:25:19.671 sys 0m1.684s 00:25:19.671 ************************************ 00:25:19.671 END TEST raid_rebuild_test_sb_io 00:25:19.671 ************************************ 00:25:19.671 23:06:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:19.671 23:06:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:19.671 23:06:54 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:25:19.671 23:06:54 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:25:19.671 23:06:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:19.671 23:06:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:19.671 23:06:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:19.671 ************************************ 00:25:19.671 START TEST raid5f_state_function_test 00:25:19.671 ************************************ 00:25:19.671 23:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:25:19.671 23:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:25:19.671 23:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:25:19.671 23:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:25:19.671 23:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:25:19.671 23:06:54 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:25:19.672 23:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:19.672 23:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:25:19.672 23:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:19.672 23:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:19.672 23:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:25:19.672 23:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:19.672 23:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:19.672 23:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:25:19.672 23:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:19.672 23:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:19.672 23:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:25:19.672 23:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:25:19.672 Process raid pid: 77722 00:25:19.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:19.672 23:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:25:19.672 23:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:25:19.672 23:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:25:19.672 23:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:25:19.672 23:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:25:19.672 23:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:25:19.672 23:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:25:19.672 23:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:25:19.672 23:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:25:19.672 23:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=77722 00:25:19.672 23:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77722' 00:25:19.672 23:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 77722 00:25:19.672 23:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 77722 ']' 00:25:19.672 23:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:19.672 23:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:19.672 23:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:19.672 23:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:19.672 23:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:19.672 23:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:19.672 [2024-12-09 23:06:54.802808] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:25:19.672 [2024-12-09 23:06:54.803086] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:19.672 [2024-12-09 23:06:54.958594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.928 [2024-12-09 23:06:55.060304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.928 [2024-12-09 23:06:55.197702] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:19.928 [2024-12-09 23:06:55.197878] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:20.497 23:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:20.497 23:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:25:20.497 23:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:20.497 23:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.497 23:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.497 [2024-12-09 23:06:55.565863] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:20.497 
[2024-12-09 23:06:55.566038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:20.497 [2024-12-09 23:06:55.566113] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:20.497 [2024-12-09 23:06:55.566144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:20.497 [2024-12-09 23:06:55.566193] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:20.497 [2024-12-09 23:06:55.566218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:20.497 23:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.497 23:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:20.497 23:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:20.497 23:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:20.497 23:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:20.497 23:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:20.497 23:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:20.497 23:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:20.497 23:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:20.497 23:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:20.497 23:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:20.497 23:06:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:20.497 23:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.497 23:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.497 23:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:20.497 23:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.497 23:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:20.497 "name": "Existed_Raid", 00:25:20.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.497 "strip_size_kb": 64, 00:25:20.497 "state": "configuring", 00:25:20.497 "raid_level": "raid5f", 00:25:20.497 "superblock": false, 00:25:20.497 "num_base_bdevs": 3, 00:25:20.497 "num_base_bdevs_discovered": 0, 00:25:20.497 "num_base_bdevs_operational": 3, 00:25:20.497 "base_bdevs_list": [ 00:25:20.497 { 00:25:20.497 "name": "BaseBdev1", 00:25:20.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.497 "is_configured": false, 00:25:20.497 "data_offset": 0, 00:25:20.497 "data_size": 0 00:25:20.497 }, 00:25:20.497 { 00:25:20.497 "name": "BaseBdev2", 00:25:20.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.497 "is_configured": false, 00:25:20.497 "data_offset": 0, 00:25:20.497 "data_size": 0 00:25:20.497 }, 00:25:20.497 { 00:25:20.497 "name": "BaseBdev3", 00:25:20.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.497 "is_configured": false, 00:25:20.497 "data_offset": 0, 00:25:20.497 "data_size": 0 00:25:20.497 } 00:25:20.497 ] 00:25:20.497 }' 00:25:20.497 23:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:20.497 23:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.756 23:06:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:20.756 23:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.756 23:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.756 [2024-12-09 23:06:55.881868] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:20.756 [2024-12-09 23:06:55.881899] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:25:20.756 23:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.756 23:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:20.756 23:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.756 23:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.756 [2024-12-09 23:06:55.889874] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:20.756 [2024-12-09 23:06:55.889994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:20.756 [2024-12-09 23:06:55.890053] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:20.756 [2024-12-09 23:06:55.890080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:20.756 [2024-12-09 23:06:55.890134] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:20.756 [2024-12-09 23:06:55.890160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:20.756 23:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.756 23:06:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:20.756 23:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.756 23:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.756 [2024-12-09 23:06:55.921960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:20.756 BaseBdev1 00:25:20.756 23:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.756 23:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:25:20.756 23:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:25:20.756 23:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:20.756 23:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:20.756 23:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:20.756 23:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:20.756 23:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:20.756 23:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.756 23:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.756 23:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.756 23:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:20.756 23:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.756 23:06:55 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.756 [ 00:25:20.756 { 00:25:20.756 "name": "BaseBdev1", 00:25:20.756 "aliases": [ 00:25:20.756 "c062d03e-dc4c-4b84-84bb-7188eb54d457" 00:25:20.756 ], 00:25:20.756 "product_name": "Malloc disk", 00:25:20.756 "block_size": 512, 00:25:20.756 "num_blocks": 65536, 00:25:20.756 "uuid": "c062d03e-dc4c-4b84-84bb-7188eb54d457", 00:25:20.756 "assigned_rate_limits": { 00:25:20.756 "rw_ios_per_sec": 0, 00:25:20.756 "rw_mbytes_per_sec": 0, 00:25:20.756 "r_mbytes_per_sec": 0, 00:25:20.756 "w_mbytes_per_sec": 0 00:25:20.756 }, 00:25:20.756 "claimed": true, 00:25:20.756 "claim_type": "exclusive_write", 00:25:20.756 "zoned": false, 00:25:20.757 "supported_io_types": { 00:25:20.757 "read": true, 00:25:20.757 "write": true, 00:25:20.757 "unmap": true, 00:25:20.757 "flush": true, 00:25:20.757 "reset": true, 00:25:20.757 "nvme_admin": false, 00:25:20.757 "nvme_io": false, 00:25:20.757 "nvme_io_md": false, 00:25:20.757 "write_zeroes": true, 00:25:20.757 "zcopy": true, 00:25:20.757 "get_zone_info": false, 00:25:20.757 "zone_management": false, 00:25:20.757 "zone_append": false, 00:25:20.757 "compare": false, 00:25:20.757 "compare_and_write": false, 00:25:20.757 "abort": true, 00:25:20.757 "seek_hole": false, 00:25:20.757 "seek_data": false, 00:25:20.757 "copy": true, 00:25:20.757 "nvme_iov_md": false 00:25:20.757 }, 00:25:20.757 "memory_domains": [ 00:25:20.757 { 00:25:20.757 "dma_device_id": "system", 00:25:20.757 "dma_device_type": 1 00:25:20.757 }, 00:25:20.757 { 00:25:20.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:20.757 "dma_device_type": 2 00:25:20.757 } 00:25:20.757 ], 00:25:20.757 "driver_specific": {} 00:25:20.757 } 00:25:20.757 ] 00:25:20.757 23:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.757 23:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:20.757 23:06:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:20.757 23:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:20.757 23:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:20.757 23:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:20.757 23:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:20.757 23:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:20.757 23:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:20.757 23:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:20.757 23:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:20.757 23:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:20.757 23:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:20.757 23:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.757 23:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.757 23:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:20.757 23:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.757 23:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:20.757 "name": "Existed_Raid", 00:25:20.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.757 "strip_size_kb": 64, 00:25:20.757 "state": 
"configuring", 00:25:20.757 "raid_level": "raid5f", 00:25:20.757 "superblock": false, 00:25:20.757 "num_base_bdevs": 3, 00:25:20.757 "num_base_bdevs_discovered": 1, 00:25:20.757 "num_base_bdevs_operational": 3, 00:25:20.757 "base_bdevs_list": [ 00:25:20.757 { 00:25:20.757 "name": "BaseBdev1", 00:25:20.757 "uuid": "c062d03e-dc4c-4b84-84bb-7188eb54d457", 00:25:20.757 "is_configured": true, 00:25:20.757 "data_offset": 0, 00:25:20.757 "data_size": 65536 00:25:20.757 }, 00:25:20.757 { 00:25:20.757 "name": "BaseBdev2", 00:25:20.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.757 "is_configured": false, 00:25:20.757 "data_offset": 0, 00:25:20.757 "data_size": 0 00:25:20.757 }, 00:25:20.757 { 00:25:20.757 "name": "BaseBdev3", 00:25:20.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.757 "is_configured": false, 00:25:20.757 "data_offset": 0, 00:25:20.757 "data_size": 0 00:25:20.757 } 00:25:20.757 ] 00:25:20.757 }' 00:25:20.757 23:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:20.757 23:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.015 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:21.015 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.015 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.015 [2024-12-09 23:06:56.254078] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:21.015 [2024-12-09 23:06:56.254139] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:25:21.015 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.015 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 
-r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:21.015 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.015 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.015 [2024-12-09 23:06:56.262139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:21.015 [2024-12-09 23:06:56.264052] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:21.015 [2024-12-09 23:06:56.264185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:21.015 [2024-12-09 23:06:56.264250] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:21.015 [2024-12-09 23:06:56.264278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:21.015 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.015 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:25:21.015 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:21.015 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:21.015 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:21.015 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:21.015 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:21.015 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:21.015 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:25:21.015 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:21.015 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:21.015 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:21.015 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:21.015 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:21.015 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.015 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.015 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.015 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.015 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:21.015 "name": "Existed_Raid", 00:25:21.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:21.015 "strip_size_kb": 64, 00:25:21.015 "state": "configuring", 00:25:21.015 "raid_level": "raid5f", 00:25:21.015 "superblock": false, 00:25:21.015 "num_base_bdevs": 3, 00:25:21.015 "num_base_bdevs_discovered": 1, 00:25:21.015 "num_base_bdevs_operational": 3, 00:25:21.015 "base_bdevs_list": [ 00:25:21.015 { 00:25:21.015 "name": "BaseBdev1", 00:25:21.015 "uuid": "c062d03e-dc4c-4b84-84bb-7188eb54d457", 00:25:21.015 "is_configured": true, 00:25:21.015 "data_offset": 0, 00:25:21.015 "data_size": 65536 00:25:21.015 }, 00:25:21.015 { 00:25:21.015 "name": "BaseBdev2", 00:25:21.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:21.015 "is_configured": false, 00:25:21.015 "data_offset": 0, 00:25:21.015 "data_size": 0 00:25:21.015 }, 00:25:21.015 { 
00:25:21.015 "name": "BaseBdev3", 00:25:21.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:21.015 "is_configured": false, 00:25:21.015 "data_offset": 0, 00:25:21.015 "data_size": 0 00:25:21.016 } 00:25:21.016 ] 00:25:21.016 }' 00:25:21.016 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:21.016 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.273 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:21.273 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.273 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.273 [2024-12-09 23:06:56.600709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:21.273 BaseBdev2 00:25:21.273 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.273 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:25:21.273 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:25:21.273 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:21.273 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:21.273 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:21.273 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:21.273 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:21.273 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.273 23:06:56 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.273 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.273 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:21.273 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.273 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.273 [ 00:25:21.273 { 00:25:21.273 "name": "BaseBdev2", 00:25:21.273 "aliases": [ 00:25:21.273 "282ca44e-189d-49c4-a772-249372416d6d" 00:25:21.273 ], 00:25:21.273 "product_name": "Malloc disk", 00:25:21.273 "block_size": 512, 00:25:21.273 "num_blocks": 65536, 00:25:21.273 "uuid": "282ca44e-189d-49c4-a772-249372416d6d", 00:25:21.273 "assigned_rate_limits": { 00:25:21.273 "rw_ios_per_sec": 0, 00:25:21.273 "rw_mbytes_per_sec": 0, 00:25:21.273 "r_mbytes_per_sec": 0, 00:25:21.273 "w_mbytes_per_sec": 0 00:25:21.273 }, 00:25:21.273 "claimed": true, 00:25:21.273 "claim_type": "exclusive_write", 00:25:21.273 "zoned": false, 00:25:21.273 "supported_io_types": { 00:25:21.273 "read": true, 00:25:21.273 "write": true, 00:25:21.273 "unmap": true, 00:25:21.273 "flush": true, 00:25:21.273 "reset": true, 00:25:21.273 "nvme_admin": false, 00:25:21.273 "nvme_io": false, 00:25:21.273 "nvme_io_md": false, 00:25:21.273 "write_zeroes": true, 00:25:21.273 "zcopy": true, 00:25:21.273 "get_zone_info": false, 00:25:21.273 "zone_management": false, 00:25:21.273 "zone_append": false, 00:25:21.273 "compare": false, 00:25:21.273 "compare_and_write": false, 00:25:21.273 "abort": true, 00:25:21.273 "seek_hole": false, 00:25:21.273 "seek_data": false, 00:25:21.273 "copy": true, 00:25:21.273 "nvme_iov_md": false 00:25:21.273 }, 00:25:21.273 "memory_domains": [ 00:25:21.273 { 00:25:21.273 "dma_device_id": "system", 00:25:21.273 "dma_device_type": 1 
00:25:21.273 }, 00:25:21.273 { 00:25:21.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:21.273 "dma_device_type": 2 00:25:21.273 } 00:25:21.273 ], 00:25:21.273 "driver_specific": {} 00:25:21.273 } 00:25:21.273 ] 00:25:21.273 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.273 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:21.273 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:21.273 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:21.273 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:21.273 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:21.273 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:21.273 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:21.273 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:21.273 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:21.273 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:21.273 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:21.273 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:21.273 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:21.273 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.273 23:06:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:21.273 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.531 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.531 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.531 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:21.531 "name": "Existed_Raid", 00:25:21.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:21.531 "strip_size_kb": 64, 00:25:21.531 "state": "configuring", 00:25:21.531 "raid_level": "raid5f", 00:25:21.531 "superblock": false, 00:25:21.531 "num_base_bdevs": 3, 00:25:21.531 "num_base_bdevs_discovered": 2, 00:25:21.531 "num_base_bdevs_operational": 3, 00:25:21.531 "base_bdevs_list": [ 00:25:21.531 { 00:25:21.531 "name": "BaseBdev1", 00:25:21.531 "uuid": "c062d03e-dc4c-4b84-84bb-7188eb54d457", 00:25:21.531 "is_configured": true, 00:25:21.531 "data_offset": 0, 00:25:21.531 "data_size": 65536 00:25:21.531 }, 00:25:21.531 { 00:25:21.531 "name": "BaseBdev2", 00:25:21.531 "uuid": "282ca44e-189d-49c4-a772-249372416d6d", 00:25:21.531 "is_configured": true, 00:25:21.531 "data_offset": 0, 00:25:21.531 "data_size": 65536 00:25:21.531 }, 00:25:21.531 { 00:25:21.531 "name": "BaseBdev3", 00:25:21.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:21.531 "is_configured": false, 00:25:21.531 "data_offset": 0, 00:25:21.531 "data_size": 0 00:25:21.531 } 00:25:21.531 ] 00:25:21.531 }' 00:25:21.531 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:21.531 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.870 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 
00:25:21.870 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.870 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.870 [2024-12-09 23:06:56.975142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:21.870 [2024-12-09 23:06:56.975187] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:21.870 [2024-12-09 23:06:56.975200] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:25:21.870 [2024-12-09 23:06:56.975444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:21.870 [2024-12-09 23:06:56.979179] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:21.870 [2024-12-09 23:06:56.979197] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:25:21.870 [2024-12-09 23:06:56.979437] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:21.870 BaseBdev3 00:25:21.870 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.870 23:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:25:21.870 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:25:21.870 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:21.870 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:21.870 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:21.870 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:21.870 23:06:56 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:21.870 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.870 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.870 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.870 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:21.870 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.870 23:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.870 [ 00:25:21.870 { 00:25:21.870 "name": "BaseBdev3", 00:25:21.870 "aliases": [ 00:25:21.870 "368ad630-1821-462b-80c4-761e9284b334" 00:25:21.870 ], 00:25:21.870 "product_name": "Malloc disk", 00:25:21.870 "block_size": 512, 00:25:21.870 "num_blocks": 65536, 00:25:21.870 "uuid": "368ad630-1821-462b-80c4-761e9284b334", 00:25:21.870 "assigned_rate_limits": { 00:25:21.870 "rw_ios_per_sec": 0, 00:25:21.870 "rw_mbytes_per_sec": 0, 00:25:21.870 "r_mbytes_per_sec": 0, 00:25:21.870 "w_mbytes_per_sec": 0 00:25:21.870 }, 00:25:21.870 "claimed": true, 00:25:21.870 "claim_type": "exclusive_write", 00:25:21.870 "zoned": false, 00:25:21.870 "supported_io_types": { 00:25:21.870 "read": true, 00:25:21.870 "write": true, 00:25:21.870 "unmap": true, 00:25:21.870 "flush": true, 00:25:21.870 "reset": true, 00:25:21.870 "nvme_admin": false, 00:25:21.870 "nvme_io": false, 00:25:21.870 "nvme_io_md": false, 00:25:21.870 "write_zeroes": true, 00:25:21.870 "zcopy": true, 00:25:21.870 "get_zone_info": false, 00:25:21.870 "zone_management": false, 00:25:21.870 "zone_append": false, 00:25:21.870 "compare": false, 00:25:21.870 "compare_and_write": false, 00:25:21.870 "abort": true, 00:25:21.870 "seek_hole": false, 00:25:21.870 "seek_data": false, 
00:25:21.870 "copy": true, 00:25:21.870 "nvme_iov_md": false 00:25:21.870 }, 00:25:21.870 "memory_domains": [ 00:25:21.870 { 00:25:21.870 "dma_device_id": "system", 00:25:21.870 "dma_device_type": 1 00:25:21.870 }, 00:25:21.870 { 00:25:21.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:21.870 "dma_device_type": 2 00:25:21.870 } 00:25:21.870 ], 00:25:21.870 "driver_specific": {} 00:25:21.870 } 00:25:21.870 ] 00:25:21.870 23:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.870 23:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:21.870 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:21.870 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:21.870 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:25:21.870 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:21.870 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:21.870 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:21.870 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:21.870 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:21.870 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:21.870 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:21.871 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:21.871 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:25:21.871 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:21.871 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.871 23:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.871 23:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.871 23:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.871 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:21.871 "name": "Existed_Raid", 00:25:21.871 "uuid": "cc337324-ad9f-4d07-af88-58b188a58b44", 00:25:21.871 "strip_size_kb": 64, 00:25:21.871 "state": "online", 00:25:21.871 "raid_level": "raid5f", 00:25:21.871 "superblock": false, 00:25:21.871 "num_base_bdevs": 3, 00:25:21.871 "num_base_bdevs_discovered": 3, 00:25:21.871 "num_base_bdevs_operational": 3, 00:25:21.871 "base_bdevs_list": [ 00:25:21.871 { 00:25:21.871 "name": "BaseBdev1", 00:25:21.871 "uuid": "c062d03e-dc4c-4b84-84bb-7188eb54d457", 00:25:21.871 "is_configured": true, 00:25:21.871 "data_offset": 0, 00:25:21.871 "data_size": 65536 00:25:21.871 }, 00:25:21.871 { 00:25:21.871 "name": "BaseBdev2", 00:25:21.871 "uuid": "282ca44e-189d-49c4-a772-249372416d6d", 00:25:21.871 "is_configured": true, 00:25:21.871 "data_offset": 0, 00:25:21.871 "data_size": 65536 00:25:21.871 }, 00:25:21.871 { 00:25:21.871 "name": "BaseBdev3", 00:25:21.871 "uuid": "368ad630-1821-462b-80c4-761e9284b334", 00:25:21.871 "is_configured": true, 00:25:21.871 "data_offset": 0, 00:25:21.871 "data_size": 65536 00:25:21.871 } 00:25:21.871 ] 00:25:21.871 }' 00:25:21.871 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:21.871 23:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- 
# set +x 00:25:22.132 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:25:22.132 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:22.132 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:22.132 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:22.132 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:22.132 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:22.132 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:22.132 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:22.132 23:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.132 23:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.132 [2024-12-09 23:06:57.331704] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:22.132 23:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.132 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:22.132 "name": "Existed_Raid", 00:25:22.132 "aliases": [ 00:25:22.132 "cc337324-ad9f-4d07-af88-58b188a58b44" 00:25:22.132 ], 00:25:22.132 "product_name": "Raid Volume", 00:25:22.132 "block_size": 512, 00:25:22.132 "num_blocks": 131072, 00:25:22.132 "uuid": "cc337324-ad9f-4d07-af88-58b188a58b44", 00:25:22.132 "assigned_rate_limits": { 00:25:22.132 "rw_ios_per_sec": 0, 00:25:22.132 "rw_mbytes_per_sec": 0, 00:25:22.132 "r_mbytes_per_sec": 0, 00:25:22.132 "w_mbytes_per_sec": 0 00:25:22.132 }, 00:25:22.132 
"claimed": false, 00:25:22.132 "zoned": false, 00:25:22.132 "supported_io_types": { 00:25:22.132 "read": true, 00:25:22.132 "write": true, 00:25:22.132 "unmap": false, 00:25:22.132 "flush": false, 00:25:22.132 "reset": true, 00:25:22.132 "nvme_admin": false, 00:25:22.132 "nvme_io": false, 00:25:22.132 "nvme_io_md": false, 00:25:22.132 "write_zeroes": true, 00:25:22.132 "zcopy": false, 00:25:22.132 "get_zone_info": false, 00:25:22.132 "zone_management": false, 00:25:22.132 "zone_append": false, 00:25:22.132 "compare": false, 00:25:22.132 "compare_and_write": false, 00:25:22.132 "abort": false, 00:25:22.132 "seek_hole": false, 00:25:22.132 "seek_data": false, 00:25:22.132 "copy": false, 00:25:22.132 "nvme_iov_md": false 00:25:22.132 }, 00:25:22.132 "driver_specific": { 00:25:22.132 "raid": { 00:25:22.132 "uuid": "cc337324-ad9f-4d07-af88-58b188a58b44", 00:25:22.132 "strip_size_kb": 64, 00:25:22.132 "state": "online", 00:25:22.132 "raid_level": "raid5f", 00:25:22.132 "superblock": false, 00:25:22.132 "num_base_bdevs": 3, 00:25:22.132 "num_base_bdevs_discovered": 3, 00:25:22.132 "num_base_bdevs_operational": 3, 00:25:22.132 "base_bdevs_list": [ 00:25:22.132 { 00:25:22.132 "name": "BaseBdev1", 00:25:22.132 "uuid": "c062d03e-dc4c-4b84-84bb-7188eb54d457", 00:25:22.132 "is_configured": true, 00:25:22.132 "data_offset": 0, 00:25:22.132 "data_size": 65536 00:25:22.132 }, 00:25:22.132 { 00:25:22.132 "name": "BaseBdev2", 00:25:22.132 "uuid": "282ca44e-189d-49c4-a772-249372416d6d", 00:25:22.132 "is_configured": true, 00:25:22.132 "data_offset": 0, 00:25:22.132 "data_size": 65536 00:25:22.132 }, 00:25:22.132 { 00:25:22.132 "name": "BaseBdev3", 00:25:22.132 "uuid": "368ad630-1821-462b-80c4-761e9284b334", 00:25:22.132 "is_configured": true, 00:25:22.132 "data_offset": 0, 00:25:22.132 "data_size": 65536 00:25:22.132 } 00:25:22.132 ] 00:25:22.132 } 00:25:22.132 } 00:25:22.132 }' 00:25:22.132 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:22.132 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:25:22.132 BaseBdev2 00:25:22.132 BaseBdev3' 00:25:22.132 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:22.132 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:22.132 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:22.132 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:25:22.132 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:22.132 23:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.132 23:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.132 23:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.132 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:22.132 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:22.132 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:22.132 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:22.132 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:22.133 23:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:22.133 23:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.133 23:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.133 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:22.133 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:22.133 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:22.390 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:22.390 23:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.390 23:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.390 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:22.390 23:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.390 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:22.390 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:22.390 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:22.390 23:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.390 23:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.390 [2024-12-09 23:06:57.531576] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:22.390 23:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.390 23:06:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:25:22.390 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:25:22.390 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:22.390 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:25:22.390 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:25:22.390 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:25:22.390 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:22.390 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:22.390 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:22.390 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:22.390 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:22.390 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:22.390 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:22.390 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:22.390 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:22.390 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:22.390 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:22.390 23:06:57 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.390 23:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.390 23:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.390 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:22.390 "name": "Existed_Raid", 00:25:22.390 "uuid": "cc337324-ad9f-4d07-af88-58b188a58b44", 00:25:22.390 "strip_size_kb": 64, 00:25:22.390 "state": "online", 00:25:22.390 "raid_level": "raid5f", 00:25:22.390 "superblock": false, 00:25:22.390 "num_base_bdevs": 3, 00:25:22.390 "num_base_bdevs_discovered": 2, 00:25:22.390 "num_base_bdevs_operational": 2, 00:25:22.390 "base_bdevs_list": [ 00:25:22.390 { 00:25:22.390 "name": null, 00:25:22.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:22.390 "is_configured": false, 00:25:22.390 "data_offset": 0, 00:25:22.390 "data_size": 65536 00:25:22.390 }, 00:25:22.390 { 00:25:22.390 "name": "BaseBdev2", 00:25:22.390 "uuid": "282ca44e-189d-49c4-a772-249372416d6d", 00:25:22.390 "is_configured": true, 00:25:22.390 "data_offset": 0, 00:25:22.390 "data_size": 65536 00:25:22.390 }, 00:25:22.390 { 00:25:22.390 "name": "BaseBdev3", 00:25:22.390 "uuid": "368ad630-1821-462b-80c4-761e9284b334", 00:25:22.390 "is_configured": true, 00:25:22.390 "data_offset": 0, 00:25:22.390 "data_size": 65536 00:25:22.390 } 00:25:22.390 ] 00:25:22.390 }' 00:25:22.390 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:22.390 23:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.646 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:25:22.646 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:22.646 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:25:22.646 23:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.646 23:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.646 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:22.646 23:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.646 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:22.646 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:22.646 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:25:22.647 23:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.647 23:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.647 [2024-12-09 23:06:57.933317] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:22.647 [2024-12-09 23:06:57.933403] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:22.647 [2024-12-09 23:06:57.991586] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:22.647 23:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.647 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:22.647 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:22.647 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:22.647 23:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:22.647 23:06:57 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.647 23:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.904 [2024-12-09 23:06:58.031643] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:22.904 [2024-12-09 23:06:58.031686] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.904 BaseBdev2 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.904 23:06:58 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.904 [ 00:25:22.904 { 00:25:22.904 "name": "BaseBdev2", 00:25:22.904 "aliases": [ 00:25:22.904 "db1ad2c0-95bb-40e0-9504-13a2608a70e2" 00:25:22.904 ], 00:25:22.904 "product_name": "Malloc disk", 00:25:22.904 "block_size": 512, 00:25:22.904 "num_blocks": 65536, 00:25:22.904 "uuid": "db1ad2c0-95bb-40e0-9504-13a2608a70e2", 00:25:22.904 "assigned_rate_limits": { 00:25:22.904 "rw_ios_per_sec": 0, 00:25:22.904 "rw_mbytes_per_sec": 0, 00:25:22.904 "r_mbytes_per_sec": 0, 00:25:22.904 "w_mbytes_per_sec": 0 00:25:22.904 }, 00:25:22.904 "claimed": false, 00:25:22.904 "zoned": false, 00:25:22.904 "supported_io_types": { 00:25:22.904 "read": true, 00:25:22.904 "write": true, 00:25:22.904 "unmap": true, 00:25:22.904 "flush": true, 00:25:22.904 "reset": true, 00:25:22.904 "nvme_admin": false, 00:25:22.904 "nvme_io": false, 00:25:22.904 "nvme_io_md": false, 00:25:22.904 "write_zeroes": true, 00:25:22.904 "zcopy": true, 00:25:22.904 "get_zone_info": false, 00:25:22.904 "zone_management": false, 00:25:22.904 "zone_append": false, 00:25:22.904 "compare": false, 00:25:22.904 "compare_and_write": false, 00:25:22.904 "abort": true, 00:25:22.904 "seek_hole": false, 00:25:22.904 "seek_data": false, 00:25:22.904 "copy": true, 00:25:22.904 "nvme_iov_md": false 00:25:22.904 }, 00:25:22.904 "memory_domains": [ 00:25:22.904 { 00:25:22.904 "dma_device_id": "system", 00:25:22.904 "dma_device_type": 1 00:25:22.904 }, 00:25:22.904 { 00:25:22.904 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:22.904 "dma_device_type": 2 00:25:22.904 } 00:25:22.904 ], 00:25:22.904 "driver_specific": {} 00:25:22.904 } 00:25:22.904 ] 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.904 BaseBdev3 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.904 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.904 [ 00:25:22.904 { 00:25:22.904 "name": "BaseBdev3", 00:25:22.904 "aliases": [ 00:25:22.904 "326a8d86-54ad-4df0-87df-4632e09d9f0c" 00:25:22.904 ], 00:25:22.904 "product_name": "Malloc disk", 00:25:22.904 "block_size": 512, 00:25:22.904 "num_blocks": 65536, 00:25:22.904 "uuid": "326a8d86-54ad-4df0-87df-4632e09d9f0c", 00:25:22.904 "assigned_rate_limits": { 00:25:22.904 "rw_ios_per_sec": 0, 00:25:22.904 "rw_mbytes_per_sec": 0, 00:25:22.904 "r_mbytes_per_sec": 0, 00:25:22.904 "w_mbytes_per_sec": 0 00:25:22.904 }, 00:25:22.904 "claimed": false, 00:25:22.904 "zoned": false, 00:25:22.904 "supported_io_types": { 00:25:22.904 "read": true, 00:25:22.904 "write": true, 00:25:22.904 "unmap": true, 00:25:22.904 "flush": true, 00:25:22.904 "reset": true, 00:25:22.904 "nvme_admin": false, 00:25:22.904 "nvme_io": false, 00:25:22.904 "nvme_io_md": false, 00:25:22.904 "write_zeroes": true, 00:25:22.904 "zcopy": true, 00:25:22.904 "get_zone_info": false, 00:25:22.905 "zone_management": false, 00:25:22.905 "zone_append": false, 00:25:22.905 "compare": false, 00:25:22.905 "compare_and_write": false, 00:25:22.905 "abort": true, 00:25:22.905 "seek_hole": false, 00:25:22.905 "seek_data": false, 00:25:22.905 "copy": true, 00:25:22.905 "nvme_iov_md": false 00:25:22.905 }, 00:25:22.905 "memory_domains": [ 00:25:22.905 { 00:25:22.905 "dma_device_id": "system", 00:25:22.905 "dma_device_type": 1 00:25:22.905 }, 
00:25:22.905 { 00:25:22.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:22.905 "dma_device_type": 2 00:25:22.905 } 00:25:22.905 ], 00:25:22.905 "driver_specific": {} 00:25:22.905 } 00:25:22.905 ] 00:25:22.905 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.905 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:22.905 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:22.905 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:22.905 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:22.905 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.905 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.905 [2024-12-09 23:06:58.238238] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:22.905 [2024-12-09 23:06:58.238380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:22.905 [2024-12-09 23:06:58.238454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:22.905 [2024-12-09 23:06:58.240260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:22.905 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.905 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:22.905 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:22.905 23:06:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:22.905 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:22.905 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:22.905 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:22.905 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:22.905 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:22.905 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:22.905 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:22.905 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:22.905 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:22.905 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.905 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.905 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.162 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:23.162 "name": "Existed_Raid", 00:25:23.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:23.162 "strip_size_kb": 64, 00:25:23.162 "state": "configuring", 00:25:23.162 "raid_level": "raid5f", 00:25:23.162 "superblock": false, 00:25:23.162 "num_base_bdevs": 3, 00:25:23.162 "num_base_bdevs_discovered": 2, 00:25:23.162 "num_base_bdevs_operational": 3, 00:25:23.162 "base_bdevs_list": [ 00:25:23.162 { 00:25:23.162 "name": "BaseBdev1", 00:25:23.162 
"uuid": "00000000-0000-0000-0000-000000000000", 00:25:23.162 "is_configured": false, 00:25:23.162 "data_offset": 0, 00:25:23.162 "data_size": 0 00:25:23.162 }, 00:25:23.162 { 00:25:23.162 "name": "BaseBdev2", 00:25:23.162 "uuid": "db1ad2c0-95bb-40e0-9504-13a2608a70e2", 00:25:23.162 "is_configured": true, 00:25:23.162 "data_offset": 0, 00:25:23.162 "data_size": 65536 00:25:23.162 }, 00:25:23.162 { 00:25:23.162 "name": "BaseBdev3", 00:25:23.162 "uuid": "326a8d86-54ad-4df0-87df-4632e09d9f0c", 00:25:23.162 "is_configured": true, 00:25:23.162 "data_offset": 0, 00:25:23.162 "data_size": 65536 00:25:23.162 } 00:25:23.162 ] 00:25:23.162 }' 00:25:23.162 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:23.162 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.420 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:25:23.420 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.420 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.420 [2024-12-09 23:06:58.554307] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:23.420 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.420 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:23.420 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:23.420 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:23.420 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:23.420 23:06:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:23.420 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:23.420 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:23.420 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:23.420 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:23.420 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:23.420 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:23.420 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.420 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.420 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:23.420 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.420 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:23.420 "name": "Existed_Raid", 00:25:23.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:23.420 "strip_size_kb": 64, 00:25:23.420 "state": "configuring", 00:25:23.420 "raid_level": "raid5f", 00:25:23.420 "superblock": false, 00:25:23.420 "num_base_bdevs": 3, 00:25:23.420 "num_base_bdevs_discovered": 1, 00:25:23.420 "num_base_bdevs_operational": 3, 00:25:23.420 "base_bdevs_list": [ 00:25:23.420 { 00:25:23.420 "name": "BaseBdev1", 00:25:23.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:23.420 "is_configured": false, 00:25:23.420 "data_offset": 0, 00:25:23.420 "data_size": 0 00:25:23.420 }, 00:25:23.420 { 00:25:23.420 "name": null, 00:25:23.420 "uuid": 
"db1ad2c0-95bb-40e0-9504-13a2608a70e2", 00:25:23.420 "is_configured": false, 00:25:23.420 "data_offset": 0, 00:25:23.420 "data_size": 65536 00:25:23.420 }, 00:25:23.420 { 00:25:23.420 "name": "BaseBdev3", 00:25:23.420 "uuid": "326a8d86-54ad-4df0-87df-4632e09d9f0c", 00:25:23.420 "is_configured": true, 00:25:23.420 "data_offset": 0, 00:25:23.420 "data_size": 65536 00:25:23.420 } 00:25:23.420 ] 00:25:23.420 }' 00:25:23.420 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:23.420 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.679 [2024-12-09 23:06:58.912411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:23.679 BaseBdev1 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.679 [ 00:25:23.679 { 00:25:23.679 "name": "BaseBdev1", 00:25:23.679 "aliases": [ 00:25:23.679 "71025faf-1aae-4bfb-ae39-8e6a18b2c68f" 00:25:23.679 ], 00:25:23.679 "product_name": "Malloc disk", 00:25:23.679 "block_size": 512, 00:25:23.679 "num_blocks": 65536, 00:25:23.679 "uuid": "71025faf-1aae-4bfb-ae39-8e6a18b2c68f", 00:25:23.679 "assigned_rate_limits": { 00:25:23.679 "rw_ios_per_sec": 0, 00:25:23.679 "rw_mbytes_per_sec": 0, 00:25:23.679 "r_mbytes_per_sec": 0, 00:25:23.679 "w_mbytes_per_sec": 0 00:25:23.679 }, 00:25:23.679 "claimed": true, 00:25:23.679 
"claim_type": "exclusive_write", 00:25:23.679 "zoned": false, 00:25:23.679 "supported_io_types": { 00:25:23.679 "read": true, 00:25:23.679 "write": true, 00:25:23.679 "unmap": true, 00:25:23.679 "flush": true, 00:25:23.679 "reset": true, 00:25:23.679 "nvme_admin": false, 00:25:23.679 "nvme_io": false, 00:25:23.679 "nvme_io_md": false, 00:25:23.679 "write_zeroes": true, 00:25:23.679 "zcopy": true, 00:25:23.679 "get_zone_info": false, 00:25:23.679 "zone_management": false, 00:25:23.679 "zone_append": false, 00:25:23.679 "compare": false, 00:25:23.679 "compare_and_write": false, 00:25:23.679 "abort": true, 00:25:23.679 "seek_hole": false, 00:25:23.679 "seek_data": false, 00:25:23.679 "copy": true, 00:25:23.679 "nvme_iov_md": false 00:25:23.679 }, 00:25:23.679 "memory_domains": [ 00:25:23.679 { 00:25:23.679 "dma_device_id": "system", 00:25:23.679 "dma_device_type": 1 00:25:23.679 }, 00:25:23.679 { 00:25:23.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:23.679 "dma_device_type": 2 00:25:23.679 } 00:25:23.679 ], 00:25:23.679 "driver_specific": {} 00:25:23.679 } 00:25:23.679 ] 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:23.679 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:23.680 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.680 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.680 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.680 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:23.680 "name": "Existed_Raid", 00:25:23.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:23.680 "strip_size_kb": 64, 00:25:23.680 "state": "configuring", 00:25:23.680 "raid_level": "raid5f", 00:25:23.680 "superblock": false, 00:25:23.680 "num_base_bdevs": 3, 00:25:23.680 "num_base_bdevs_discovered": 2, 00:25:23.680 "num_base_bdevs_operational": 3, 00:25:23.680 "base_bdevs_list": [ 00:25:23.680 { 00:25:23.680 "name": "BaseBdev1", 00:25:23.680 "uuid": "71025faf-1aae-4bfb-ae39-8e6a18b2c68f", 00:25:23.680 "is_configured": true, 00:25:23.680 "data_offset": 0, 00:25:23.680 "data_size": 65536 00:25:23.680 }, 00:25:23.680 { 00:25:23.680 "name": null, 00:25:23.680 "uuid": "db1ad2c0-95bb-40e0-9504-13a2608a70e2", 00:25:23.680 "is_configured": false, 00:25:23.680 "data_offset": 0, 00:25:23.680 
"data_size": 65536 00:25:23.680 }, 00:25:23.680 { 00:25:23.680 "name": "BaseBdev3", 00:25:23.680 "uuid": "326a8d86-54ad-4df0-87df-4632e09d9f0c", 00:25:23.680 "is_configured": true, 00:25:23.680 "data_offset": 0, 00:25:23.680 "data_size": 65536 00:25:23.680 } 00:25:23.680 ] 00:25:23.680 }' 00:25:23.680 23:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:23.680 23:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.936 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:23.936 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:23.936 23:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.936 23:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.936 23:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.936 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:25:23.936 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:25:23.936 23:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.936 23:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.936 [2024-12-09 23:06:59.284531] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:23.936 23:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.936 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:23.936 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:25:23.936 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:23.936 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:23.936 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:23.936 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:23.936 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:23.936 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:23.936 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:23.936 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:23.936 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:23.936 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:23.936 23:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.936 23:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:24.193 23:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.193 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:24.193 "name": "Existed_Raid", 00:25:24.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:24.193 "strip_size_kb": 64, 00:25:24.193 "state": "configuring", 00:25:24.193 "raid_level": "raid5f", 00:25:24.193 "superblock": false, 00:25:24.193 "num_base_bdevs": 3, 00:25:24.193 "num_base_bdevs_discovered": 1, 00:25:24.193 "num_base_bdevs_operational": 3, 
00:25:24.193 "base_bdevs_list": [ 00:25:24.193 { 00:25:24.193 "name": "BaseBdev1", 00:25:24.193 "uuid": "71025faf-1aae-4bfb-ae39-8e6a18b2c68f", 00:25:24.193 "is_configured": true, 00:25:24.193 "data_offset": 0, 00:25:24.193 "data_size": 65536 00:25:24.193 }, 00:25:24.193 { 00:25:24.193 "name": null, 00:25:24.193 "uuid": "db1ad2c0-95bb-40e0-9504-13a2608a70e2", 00:25:24.193 "is_configured": false, 00:25:24.193 "data_offset": 0, 00:25:24.193 "data_size": 65536 00:25:24.193 }, 00:25:24.193 { 00:25:24.193 "name": null, 00:25:24.193 "uuid": "326a8d86-54ad-4df0-87df-4632e09d9f0c", 00:25:24.193 "is_configured": false, 00:25:24.193 "data_offset": 0, 00:25:24.193 "data_size": 65536 00:25:24.193 } 00:25:24.193 ] 00:25:24.193 }' 00:25:24.193 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:24.193 23:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:24.451 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:24.451 23:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.451 23:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:24.451 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:24.451 23:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.451 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:25:24.451 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:24.451 23:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.451 23:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:24.451 
[2024-12-09 23:06:59.632624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:24.451 23:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.451 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:24.451 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:24.451 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:24.451 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:24.451 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:24.451 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:24.451 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:24.451 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:24.451 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:24.451 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:24.451 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:24.451 23:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.451 23:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:24.451 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:24.451 23:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.451 
23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:24.451 "name": "Existed_Raid", 00:25:24.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:24.451 "strip_size_kb": 64, 00:25:24.451 "state": "configuring", 00:25:24.451 "raid_level": "raid5f", 00:25:24.451 "superblock": false, 00:25:24.451 "num_base_bdevs": 3, 00:25:24.451 "num_base_bdevs_discovered": 2, 00:25:24.451 "num_base_bdevs_operational": 3, 00:25:24.451 "base_bdevs_list": [ 00:25:24.451 { 00:25:24.451 "name": "BaseBdev1", 00:25:24.451 "uuid": "71025faf-1aae-4bfb-ae39-8e6a18b2c68f", 00:25:24.451 "is_configured": true, 00:25:24.451 "data_offset": 0, 00:25:24.451 "data_size": 65536 00:25:24.451 }, 00:25:24.451 { 00:25:24.451 "name": null, 00:25:24.451 "uuid": "db1ad2c0-95bb-40e0-9504-13a2608a70e2", 00:25:24.451 "is_configured": false, 00:25:24.451 "data_offset": 0, 00:25:24.451 "data_size": 65536 00:25:24.451 }, 00:25:24.451 { 00:25:24.451 "name": "BaseBdev3", 00:25:24.451 "uuid": "326a8d86-54ad-4df0-87df-4632e09d9f0c", 00:25:24.451 "is_configured": true, 00:25:24.451 "data_offset": 0, 00:25:24.451 "data_size": 65536 00:25:24.451 } 00:25:24.451 ] 00:25:24.451 }' 00:25:24.451 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:24.451 23:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:24.709 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:24.709 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:24.709 23:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.709 23:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:24.709 23:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.709 23:06:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:25:24.709 23:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:24.709 23:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.709 23:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:24.709 [2024-12-09 23:06:59.968679] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:24.709 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.709 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:24.709 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:24.709 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:24.709 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:24.709 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:24.709 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:24.709 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:24.709 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:24.709 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:24.709 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:24.709 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:24.709 23:07:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:24.709 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.709 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:24.709 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.709 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:24.709 "name": "Existed_Raid", 00:25:24.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:24.709 "strip_size_kb": 64, 00:25:24.709 "state": "configuring", 00:25:24.709 "raid_level": "raid5f", 00:25:24.709 "superblock": false, 00:25:24.709 "num_base_bdevs": 3, 00:25:24.709 "num_base_bdevs_discovered": 1, 00:25:24.709 "num_base_bdevs_operational": 3, 00:25:24.709 "base_bdevs_list": [ 00:25:24.709 { 00:25:24.709 "name": null, 00:25:24.709 "uuid": "71025faf-1aae-4bfb-ae39-8e6a18b2c68f", 00:25:24.709 "is_configured": false, 00:25:24.709 "data_offset": 0, 00:25:24.709 "data_size": 65536 00:25:24.709 }, 00:25:24.709 { 00:25:24.709 "name": null, 00:25:24.709 "uuid": "db1ad2c0-95bb-40e0-9504-13a2608a70e2", 00:25:24.709 "is_configured": false, 00:25:24.709 "data_offset": 0, 00:25:24.709 "data_size": 65536 00:25:24.709 }, 00:25:24.709 { 00:25:24.709 "name": "BaseBdev3", 00:25:24.709 "uuid": "326a8d86-54ad-4df0-87df-4632e09d9f0c", 00:25:24.709 "is_configured": true, 00:25:24.709 "data_offset": 0, 00:25:24.709 "data_size": 65536 00:25:24.709 } 00:25:24.709 ] 00:25:24.709 }' 00:25:24.709 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:24.709 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.275 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:25.275 23:07:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:25.275 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.275 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.275 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.275 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:25:25.275 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:25.275 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.275 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.275 [2024-12-09 23:07:00.366328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:25.275 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.275 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:25.275 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:25.275 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:25.275 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:25.275 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:25.275 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:25.275 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:25.275 
23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:25.275 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:25.275 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:25.275 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:25.275 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:25.275 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.275 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.275 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.275 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:25.275 "name": "Existed_Raid", 00:25:25.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:25.275 "strip_size_kb": 64, 00:25:25.275 "state": "configuring", 00:25:25.276 "raid_level": "raid5f", 00:25:25.276 "superblock": false, 00:25:25.276 "num_base_bdevs": 3, 00:25:25.276 "num_base_bdevs_discovered": 2, 00:25:25.276 "num_base_bdevs_operational": 3, 00:25:25.276 "base_bdevs_list": [ 00:25:25.276 { 00:25:25.276 "name": null, 00:25:25.276 "uuid": "71025faf-1aae-4bfb-ae39-8e6a18b2c68f", 00:25:25.276 "is_configured": false, 00:25:25.276 "data_offset": 0, 00:25:25.276 "data_size": 65536 00:25:25.276 }, 00:25:25.276 { 00:25:25.276 "name": "BaseBdev2", 00:25:25.276 "uuid": "db1ad2c0-95bb-40e0-9504-13a2608a70e2", 00:25:25.276 "is_configured": true, 00:25:25.276 "data_offset": 0, 00:25:25.276 "data_size": 65536 00:25:25.276 }, 00:25:25.276 { 00:25:25.276 "name": "BaseBdev3", 00:25:25.276 "uuid": "326a8d86-54ad-4df0-87df-4632e09d9f0c", 00:25:25.276 "is_configured": true, 
00:25:25.276 "data_offset": 0, 00:25:25.276 "data_size": 65536 00:25:25.276 } 00:25:25.276 ] 00:25:25.276 }' 00:25:25.276 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:25.276 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.533 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 71025faf-1aae-4bfb-ae39-8e6a18b2c68f 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.534 [2024-12-09 
23:07:00.797013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:25.534 [2024-12-09 23:07:00.797052] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:25.534 [2024-12-09 23:07:00.797059] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:25:25.534 [2024-12-09 23:07:00.797331] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:25:25.534 [2024-12-09 23:07:00.800262] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:25.534 [2024-12-09 23:07:00.800341] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:25:25.534 [2024-12-09 23:07:00.800610] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:25.534 NewBaseBdev 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.534 [ 00:25:25.534 { 00:25:25.534 "name": "NewBaseBdev", 00:25:25.534 "aliases": [ 00:25:25.534 "71025faf-1aae-4bfb-ae39-8e6a18b2c68f" 00:25:25.534 ], 00:25:25.534 "product_name": "Malloc disk", 00:25:25.534 "block_size": 512, 00:25:25.534 "num_blocks": 65536, 00:25:25.534 "uuid": "71025faf-1aae-4bfb-ae39-8e6a18b2c68f", 00:25:25.534 "assigned_rate_limits": { 00:25:25.534 "rw_ios_per_sec": 0, 00:25:25.534 "rw_mbytes_per_sec": 0, 00:25:25.534 "r_mbytes_per_sec": 0, 00:25:25.534 "w_mbytes_per_sec": 0 00:25:25.534 }, 00:25:25.534 "claimed": true, 00:25:25.534 "claim_type": "exclusive_write", 00:25:25.534 "zoned": false, 00:25:25.534 "supported_io_types": { 00:25:25.534 "read": true, 00:25:25.534 "write": true, 00:25:25.534 "unmap": true, 00:25:25.534 "flush": true, 00:25:25.534 "reset": true, 00:25:25.534 "nvme_admin": false, 00:25:25.534 "nvme_io": false, 00:25:25.534 "nvme_io_md": false, 00:25:25.534 "write_zeroes": true, 00:25:25.534 "zcopy": true, 00:25:25.534 "get_zone_info": false, 00:25:25.534 "zone_management": false, 00:25:25.534 "zone_append": false, 00:25:25.534 "compare": false, 00:25:25.534 "compare_and_write": false, 00:25:25.534 "abort": true, 00:25:25.534 "seek_hole": false, 00:25:25.534 "seek_data": false, 00:25:25.534 "copy": true, 00:25:25.534 "nvme_iov_md": false 00:25:25.534 }, 00:25:25.534 "memory_domains": [ 00:25:25.534 { 00:25:25.534 "dma_device_id": "system", 00:25:25.534 "dma_device_type": 1 00:25:25.534 }, 00:25:25.534 { 
00:25:25.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:25.534 "dma_device_type": 2 00:25:25.534 } 00:25:25.534 ], 00:25:25.534 "driver_specific": {} 00:25:25.534 } 00:25:25.534 ] 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:25.534 "name": "Existed_Raid", 00:25:25.534 "uuid": "a19d1133-eba4-4886-afab-248982d33af0", 00:25:25.534 "strip_size_kb": 64, 00:25:25.534 "state": "online", 00:25:25.534 "raid_level": "raid5f", 00:25:25.534 "superblock": false, 00:25:25.534 "num_base_bdevs": 3, 00:25:25.534 "num_base_bdevs_discovered": 3, 00:25:25.534 "num_base_bdevs_operational": 3, 00:25:25.534 "base_bdevs_list": [ 00:25:25.534 { 00:25:25.534 "name": "NewBaseBdev", 00:25:25.534 "uuid": "71025faf-1aae-4bfb-ae39-8e6a18b2c68f", 00:25:25.534 "is_configured": true, 00:25:25.534 "data_offset": 0, 00:25:25.534 "data_size": 65536 00:25:25.534 }, 00:25:25.534 { 00:25:25.534 "name": "BaseBdev2", 00:25:25.534 "uuid": "db1ad2c0-95bb-40e0-9504-13a2608a70e2", 00:25:25.534 "is_configured": true, 00:25:25.534 "data_offset": 0, 00:25:25.534 "data_size": 65536 00:25:25.534 }, 00:25:25.534 { 00:25:25.534 "name": "BaseBdev3", 00:25:25.534 "uuid": "326a8d86-54ad-4df0-87df-4632e09d9f0c", 00:25:25.534 "is_configured": true, 00:25:25.534 "data_offset": 0, 00:25:25.534 "data_size": 65536 00:25:25.534 } 00:25:25.534 ] 00:25:25.534 }' 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:25.534 23:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.814 23:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:25:25.814 23:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:25.814 23:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:25.814 23:07:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:25.814 23:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:25.814 23:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:25.814 23:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:25.814 23:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:25.814 23:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.814 23:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.814 [2024-12-09 23:07:01.160108] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:25.814 23:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.072 23:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:26.072 "name": "Existed_Raid", 00:25:26.072 "aliases": [ 00:25:26.072 "a19d1133-eba4-4886-afab-248982d33af0" 00:25:26.072 ], 00:25:26.072 "product_name": "Raid Volume", 00:25:26.072 "block_size": 512, 00:25:26.072 "num_blocks": 131072, 00:25:26.072 "uuid": "a19d1133-eba4-4886-afab-248982d33af0", 00:25:26.072 "assigned_rate_limits": { 00:25:26.072 "rw_ios_per_sec": 0, 00:25:26.072 "rw_mbytes_per_sec": 0, 00:25:26.072 "r_mbytes_per_sec": 0, 00:25:26.072 "w_mbytes_per_sec": 0 00:25:26.072 }, 00:25:26.072 "claimed": false, 00:25:26.072 "zoned": false, 00:25:26.072 "supported_io_types": { 00:25:26.072 "read": true, 00:25:26.072 "write": true, 00:25:26.072 "unmap": false, 00:25:26.072 "flush": false, 00:25:26.072 "reset": true, 00:25:26.072 "nvme_admin": false, 00:25:26.072 "nvme_io": false, 00:25:26.072 "nvme_io_md": false, 00:25:26.072 "write_zeroes": true, 00:25:26.072 "zcopy": false, 00:25:26.072 "get_zone_info": false, 
00:25:26.072 "zone_management": false, 00:25:26.072 "zone_append": false, 00:25:26.072 "compare": false, 00:25:26.072 "compare_and_write": false, 00:25:26.072 "abort": false, 00:25:26.072 "seek_hole": false, 00:25:26.072 "seek_data": false, 00:25:26.072 "copy": false, 00:25:26.072 "nvme_iov_md": false 00:25:26.072 }, 00:25:26.072 "driver_specific": { 00:25:26.072 "raid": { 00:25:26.072 "uuid": "a19d1133-eba4-4886-afab-248982d33af0", 00:25:26.072 "strip_size_kb": 64, 00:25:26.072 "state": "online", 00:25:26.072 "raid_level": "raid5f", 00:25:26.072 "superblock": false, 00:25:26.072 "num_base_bdevs": 3, 00:25:26.072 "num_base_bdevs_discovered": 3, 00:25:26.072 "num_base_bdevs_operational": 3, 00:25:26.072 "base_bdevs_list": [ 00:25:26.072 { 00:25:26.072 "name": "NewBaseBdev", 00:25:26.072 "uuid": "71025faf-1aae-4bfb-ae39-8e6a18b2c68f", 00:25:26.072 "is_configured": true, 00:25:26.072 "data_offset": 0, 00:25:26.072 "data_size": 65536 00:25:26.072 }, 00:25:26.072 { 00:25:26.072 "name": "BaseBdev2", 00:25:26.072 "uuid": "db1ad2c0-95bb-40e0-9504-13a2608a70e2", 00:25:26.072 "is_configured": true, 00:25:26.072 "data_offset": 0, 00:25:26.072 "data_size": 65536 00:25:26.072 }, 00:25:26.072 { 00:25:26.072 "name": "BaseBdev3", 00:25:26.073 "uuid": "326a8d86-54ad-4df0-87df-4632e09d9f0c", 00:25:26.073 "is_configured": true, 00:25:26.073 "data_offset": 0, 00:25:26.073 "data_size": 65536 00:25:26.073 } 00:25:26.073 ] 00:25:26.073 } 00:25:26.073 } 00:25:26.073 }' 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:25:26.073 BaseBdev2 00:25:26.073 BaseBdev3' 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:26.073 23:07:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ 
\ \ ]] 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.073 [2024-12-09 23:07:01.331962] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:26.073 [2024-12-09 23:07:01.331983] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:26.073 [2024-12-09 23:07:01.332041] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:26.073 [2024-12-09 23:07:01.332295] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:26.073 [2024-12-09 23:07:01.332306] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:25:26.073 
23:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 77722 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 77722 ']' 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 77722 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77722 00:25:26.073 killing process with pid 77722 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77722' 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 77722 00:25:26.073 [2024-12-09 23:07:01.356768] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:26.073 23:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 77722 00:25:26.342 [2024-12-09 23:07:01.508272] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:26.911 ************************************ 00:25:26.911 END TEST raid5f_state_function_test 00:25:26.911 ************************************ 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:25:26.911 00:25:26.911 real 0m7.366s 00:25:26.911 user 0m11.793s 00:25:26.911 sys 0m1.254s 00:25:26.911 23:07:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.911 23:07:02 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:25:26.911 23:07:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:26.911 23:07:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:26.911 23:07:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:26.911 ************************************ 00:25:26.911 START TEST raid5f_state_function_test_sb 00:25:26.911 ************************************ 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:25:26.911 
23:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:26.911 Process raid pid: 78310 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=78310 00:25:26.911 23:07:02 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78310' 00:25:26.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 78310 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78310 ']' 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:26.911 23:07:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:26.911 [2024-12-09 23:07:02.231267] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:25:26.911 [2024-12-09 23:07:02.231396] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:27.168 [2024-12-09 23:07:02.386059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.168 [2024-12-09 23:07:02.525479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.425 [2024-12-09 23:07:02.665863] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:27.425 [2024-12-09 23:07:02.665903] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:27.991 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:27.991 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:25:27.991 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:27.991 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.991 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:27.991 [2024-12-09 23:07:03.089756] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:27.991 [2024-12-09 23:07:03.089816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:27.991 [2024-12-09 23:07:03.089831] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:27.991 [2024-12-09 23:07:03.089841] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:27.991 [2024-12-09 23:07:03.089848] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:25:27.991 [2024-12-09 23:07:03.089856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:27.991 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.991 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:27.991 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:27.991 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:27.991 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:27.991 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:27.991 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:27.991 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:27.991 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:27.991 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:27.991 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:27.991 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:27.991 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:27.991 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.991 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:27.991 23:07:03 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.991 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:27.991 "name": "Existed_Raid", 00:25:27.991 "uuid": "a55b0d63-189c-425b-89ce-962993a50f29", 00:25:27.991 "strip_size_kb": 64, 00:25:27.991 "state": "configuring", 00:25:27.991 "raid_level": "raid5f", 00:25:27.991 "superblock": true, 00:25:27.991 "num_base_bdevs": 3, 00:25:27.991 "num_base_bdevs_discovered": 0, 00:25:27.991 "num_base_bdevs_operational": 3, 00:25:27.991 "base_bdevs_list": [ 00:25:27.991 { 00:25:27.991 "name": "BaseBdev1", 00:25:27.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:27.991 "is_configured": false, 00:25:27.991 "data_offset": 0, 00:25:27.991 "data_size": 0 00:25:27.991 }, 00:25:27.991 { 00:25:27.991 "name": "BaseBdev2", 00:25:27.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:27.991 "is_configured": false, 00:25:27.991 "data_offset": 0, 00:25:27.991 "data_size": 0 00:25:27.991 }, 00:25:27.991 { 00:25:27.991 "name": "BaseBdev3", 00:25:27.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:27.991 "is_configured": false, 00:25:27.991 "data_offset": 0, 00:25:27.991 "data_size": 0 00:25:27.991 } 00:25:27.991 ] 00:25:27.991 }' 00:25:27.991 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:27.991 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:28.249 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:28.249 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.249 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:28.249 [2024-12-09 23:07:03.417755] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:28.249 
[2024-12-09 23:07:03.417789] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:25:28.249 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.249 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:28.249 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.249 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:28.249 [2024-12-09 23:07:03.425774] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:28.249 [2024-12-09 23:07:03.425817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:28.249 [2024-12-09 23:07:03.425826] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:28.249 [2024-12-09 23:07:03.425835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:28.249 [2024-12-09 23:07:03.425841] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:28.249 [2024-12-09 23:07:03.425850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:28.249 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.249 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:28.249 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.249 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:28.249 [2024-12-09 23:07:03.458413] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:28.249 BaseBdev1 00:25:28.249 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.249 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:25:28.249 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:25:28.249 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:28.249 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:28.249 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:28.249 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:28.249 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:28.249 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.249 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:28.249 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.249 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:28.249 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.249 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:28.249 [ 00:25:28.249 { 00:25:28.249 "name": "BaseBdev1", 00:25:28.249 "aliases": [ 00:25:28.249 "5ebf61e9-2372-4a18-97af-4cfa7d4387ab" 00:25:28.249 ], 00:25:28.249 "product_name": "Malloc disk", 00:25:28.249 "block_size": 512, 00:25:28.249 
"num_blocks": 65536, 00:25:28.249 "uuid": "5ebf61e9-2372-4a18-97af-4cfa7d4387ab", 00:25:28.249 "assigned_rate_limits": { 00:25:28.249 "rw_ios_per_sec": 0, 00:25:28.249 "rw_mbytes_per_sec": 0, 00:25:28.249 "r_mbytes_per_sec": 0, 00:25:28.249 "w_mbytes_per_sec": 0 00:25:28.249 }, 00:25:28.249 "claimed": true, 00:25:28.249 "claim_type": "exclusive_write", 00:25:28.249 "zoned": false, 00:25:28.249 "supported_io_types": { 00:25:28.249 "read": true, 00:25:28.249 "write": true, 00:25:28.249 "unmap": true, 00:25:28.249 "flush": true, 00:25:28.249 "reset": true, 00:25:28.249 "nvme_admin": false, 00:25:28.249 "nvme_io": false, 00:25:28.249 "nvme_io_md": false, 00:25:28.249 "write_zeroes": true, 00:25:28.249 "zcopy": true, 00:25:28.249 "get_zone_info": false, 00:25:28.249 "zone_management": false, 00:25:28.249 "zone_append": false, 00:25:28.249 "compare": false, 00:25:28.249 "compare_and_write": false, 00:25:28.249 "abort": true, 00:25:28.249 "seek_hole": false, 00:25:28.249 "seek_data": false, 00:25:28.250 "copy": true, 00:25:28.250 "nvme_iov_md": false 00:25:28.250 }, 00:25:28.250 "memory_domains": [ 00:25:28.250 { 00:25:28.250 "dma_device_id": "system", 00:25:28.250 "dma_device_type": 1 00:25:28.250 }, 00:25:28.250 { 00:25:28.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:28.250 "dma_device_type": 2 00:25:28.250 } 00:25:28.250 ], 00:25:28.250 "driver_specific": {} 00:25:28.250 } 00:25:28.250 ] 00:25:28.250 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.250 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:28.250 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:28.250 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:28.250 23:07:03 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:28.250 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:28.250 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:28.250 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:28.250 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:28.250 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:28.250 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:28.250 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:28.250 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:28.250 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:28.250 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.250 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:28.250 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.250 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:28.250 "name": "Existed_Raid", 00:25:28.250 "uuid": "84b509f1-75e7-4c53-80f9-fb596e95177f", 00:25:28.250 "strip_size_kb": 64, 00:25:28.250 "state": "configuring", 00:25:28.250 "raid_level": "raid5f", 00:25:28.250 "superblock": true, 00:25:28.250 "num_base_bdevs": 3, 00:25:28.250 "num_base_bdevs_discovered": 1, 00:25:28.250 "num_base_bdevs_operational": 3, 00:25:28.250 "base_bdevs_list": [ 00:25:28.250 { 00:25:28.250 
"name": "BaseBdev1", 00:25:28.250 "uuid": "5ebf61e9-2372-4a18-97af-4cfa7d4387ab", 00:25:28.250 "is_configured": true, 00:25:28.250 "data_offset": 2048, 00:25:28.250 "data_size": 63488 00:25:28.250 }, 00:25:28.250 { 00:25:28.250 "name": "BaseBdev2", 00:25:28.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.250 "is_configured": false, 00:25:28.250 "data_offset": 0, 00:25:28.250 "data_size": 0 00:25:28.250 }, 00:25:28.250 { 00:25:28.250 "name": "BaseBdev3", 00:25:28.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.250 "is_configured": false, 00:25:28.250 "data_offset": 0, 00:25:28.250 "data_size": 0 00:25:28.250 } 00:25:28.250 ] 00:25:28.250 }' 00:25:28.250 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:28.250 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:28.507 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:28.507 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.507 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:28.507 [2024-12-09 23:07:03.802546] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:28.507 [2024-12-09 23:07:03.802711] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:25:28.507 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.507 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:28.507 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.507 23:07:03 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:25:28.507 [2024-12-09 23:07:03.810603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:28.507 [2024-12-09 23:07:03.812483] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:28.507 [2024-12-09 23:07:03.812525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:28.507 [2024-12-09 23:07:03.812534] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:28.507 [2024-12-09 23:07:03.812543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:28.507 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.507 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:25:28.507 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:28.507 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:28.507 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:28.507 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:28.507 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:28.507 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:28.507 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:28.507 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:28.507 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:25:28.507 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:28.507 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:28.507 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:28.507 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:28.507 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.507 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:28.507 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.507 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:28.507 "name": "Existed_Raid", 00:25:28.507 "uuid": "6aea1781-3e77-449b-b20a-1c9d4772d402", 00:25:28.507 "strip_size_kb": 64, 00:25:28.507 "state": "configuring", 00:25:28.507 "raid_level": "raid5f", 00:25:28.507 "superblock": true, 00:25:28.507 "num_base_bdevs": 3, 00:25:28.507 "num_base_bdevs_discovered": 1, 00:25:28.507 "num_base_bdevs_operational": 3, 00:25:28.507 "base_bdevs_list": [ 00:25:28.507 { 00:25:28.507 "name": "BaseBdev1", 00:25:28.507 "uuid": "5ebf61e9-2372-4a18-97af-4cfa7d4387ab", 00:25:28.507 "is_configured": true, 00:25:28.507 "data_offset": 2048, 00:25:28.507 "data_size": 63488 00:25:28.507 }, 00:25:28.507 { 00:25:28.507 "name": "BaseBdev2", 00:25:28.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.507 "is_configured": false, 00:25:28.507 "data_offset": 0, 00:25:28.508 "data_size": 0 00:25:28.508 }, 00:25:28.508 { 00:25:28.508 "name": "BaseBdev3", 00:25:28.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.508 "is_configured": false, 00:25:28.508 "data_offset": 0, 00:25:28.508 "data_size": 
0 00:25:28.508 } 00:25:28.508 ] 00:25:28.508 }' 00:25:28.508 23:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:28.508 23:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:28.780 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:28.780 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.780 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:29.038 [2024-12-09 23:07:04.165847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:29.038 BaseBdev2 00:25:29.038 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.038 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:25:29.038 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:25:29.038 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:29.038 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:29.038 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:29.038 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:29.038 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:29.038 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.038 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:29.038 23:07:04 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.038 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:29.038 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.038 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:29.038 [ 00:25:29.038 { 00:25:29.038 "name": "BaseBdev2", 00:25:29.038 "aliases": [ 00:25:29.038 "cb6383bf-88b4-4f20-991d-cc391397efbc" 00:25:29.038 ], 00:25:29.038 "product_name": "Malloc disk", 00:25:29.038 "block_size": 512, 00:25:29.038 "num_blocks": 65536, 00:25:29.038 "uuid": "cb6383bf-88b4-4f20-991d-cc391397efbc", 00:25:29.038 "assigned_rate_limits": { 00:25:29.038 "rw_ios_per_sec": 0, 00:25:29.038 "rw_mbytes_per_sec": 0, 00:25:29.038 "r_mbytes_per_sec": 0, 00:25:29.038 "w_mbytes_per_sec": 0 00:25:29.038 }, 00:25:29.038 "claimed": true, 00:25:29.038 "claim_type": "exclusive_write", 00:25:29.038 "zoned": false, 00:25:29.038 "supported_io_types": { 00:25:29.038 "read": true, 00:25:29.038 "write": true, 00:25:29.038 "unmap": true, 00:25:29.038 "flush": true, 00:25:29.038 "reset": true, 00:25:29.038 "nvme_admin": false, 00:25:29.038 "nvme_io": false, 00:25:29.038 "nvme_io_md": false, 00:25:29.038 "write_zeroes": true, 00:25:29.038 "zcopy": true, 00:25:29.038 "get_zone_info": false, 00:25:29.038 "zone_management": false, 00:25:29.038 "zone_append": false, 00:25:29.038 "compare": false, 00:25:29.038 "compare_and_write": false, 00:25:29.038 "abort": true, 00:25:29.038 "seek_hole": false, 00:25:29.038 "seek_data": false, 00:25:29.038 "copy": true, 00:25:29.038 "nvme_iov_md": false 00:25:29.038 }, 00:25:29.038 "memory_domains": [ 00:25:29.038 { 00:25:29.038 "dma_device_id": "system", 00:25:29.038 "dma_device_type": 1 00:25:29.038 }, 00:25:29.038 { 00:25:29.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:29.038 "dma_device_type": 2 00:25:29.039 } 
00:25:29.039 ], 00:25:29.039 "driver_specific": {} 00:25:29.039 } 00:25:29.039 ] 00:25:29.039 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.039 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:29.039 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:29.039 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:29.039 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:29.039 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:29.039 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:29.039 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:29.039 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:29.039 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:29.039 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:29.039 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:29.039 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:29.039 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:29.039 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:29.039 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:25:29.039 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.039 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:29.039 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.039 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:29.039 "name": "Existed_Raid", 00:25:29.039 "uuid": "6aea1781-3e77-449b-b20a-1c9d4772d402", 00:25:29.039 "strip_size_kb": 64, 00:25:29.039 "state": "configuring", 00:25:29.039 "raid_level": "raid5f", 00:25:29.039 "superblock": true, 00:25:29.039 "num_base_bdevs": 3, 00:25:29.039 "num_base_bdevs_discovered": 2, 00:25:29.039 "num_base_bdevs_operational": 3, 00:25:29.039 "base_bdevs_list": [ 00:25:29.039 { 00:25:29.039 "name": "BaseBdev1", 00:25:29.039 "uuid": "5ebf61e9-2372-4a18-97af-4cfa7d4387ab", 00:25:29.039 "is_configured": true, 00:25:29.039 "data_offset": 2048, 00:25:29.039 "data_size": 63488 00:25:29.039 }, 00:25:29.039 { 00:25:29.039 "name": "BaseBdev2", 00:25:29.039 "uuid": "cb6383bf-88b4-4f20-991d-cc391397efbc", 00:25:29.039 "is_configured": true, 00:25:29.039 "data_offset": 2048, 00:25:29.039 "data_size": 63488 00:25:29.039 }, 00:25:29.039 { 00:25:29.039 "name": "BaseBdev3", 00:25:29.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:29.039 "is_configured": false, 00:25:29.039 "data_offset": 0, 00:25:29.039 "data_size": 0 00:25:29.039 } 00:25:29.039 ] 00:25:29.039 }' 00:25:29.039 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:29.039 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:29.298 [2024-12-09 23:07:04.544745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:29.298 [2024-12-09 23:07:04.545241] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:29.298 [2024-12-09 23:07:04.545350] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:29.298 [2024-12-09 23:07:04.545649] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:29.298 BaseBdev3 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:29.298 [2024-12-09 23:07:04.549609] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:29.298 [2024-12-09 23:07:04.549632] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:25:29.298 [2024-12-09 23:07:04.549825] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:29.298 [ 00:25:29.298 { 00:25:29.298 "name": "BaseBdev3", 00:25:29.298 "aliases": [ 00:25:29.298 "7d31c23f-b806-443a-a867-d064576a195c" 00:25:29.298 ], 00:25:29.298 "product_name": "Malloc disk", 00:25:29.298 "block_size": 512, 00:25:29.298 "num_blocks": 65536, 00:25:29.298 "uuid": "7d31c23f-b806-443a-a867-d064576a195c", 00:25:29.298 "assigned_rate_limits": { 00:25:29.298 "rw_ios_per_sec": 0, 00:25:29.298 "rw_mbytes_per_sec": 0, 00:25:29.298 "r_mbytes_per_sec": 0, 00:25:29.298 "w_mbytes_per_sec": 0 00:25:29.298 }, 00:25:29.298 "claimed": true, 00:25:29.298 "claim_type": "exclusive_write", 00:25:29.298 "zoned": false, 00:25:29.298 "supported_io_types": { 00:25:29.298 "read": true, 00:25:29.298 "write": true, 00:25:29.298 "unmap": true, 00:25:29.298 "flush": true, 00:25:29.298 "reset": true, 00:25:29.298 "nvme_admin": false, 00:25:29.298 "nvme_io": false, 00:25:29.298 "nvme_io_md": false, 00:25:29.298 "write_zeroes": true, 00:25:29.298 "zcopy": true, 00:25:29.298 "get_zone_info": false, 00:25:29.298 "zone_management": false, 00:25:29.298 "zone_append": false, 00:25:29.298 "compare": false, 00:25:29.298 "compare_and_write": false, 00:25:29.298 "abort": true, 00:25:29.298 "seek_hole": false, 00:25:29.298 "seek_data": false, 00:25:29.298 "copy": true, 00:25:29.298 
"nvme_iov_md": false 00:25:29.298 }, 00:25:29.298 "memory_domains": [ 00:25:29.298 { 00:25:29.298 "dma_device_id": "system", 00:25:29.298 "dma_device_type": 1 00:25:29.298 }, 00:25:29.298 { 00:25:29.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:29.298 "dma_device_type": 2 00:25:29.298 } 00:25:29.298 ], 00:25:29.298 "driver_specific": {} 00:25:29.298 } 00:25:29.298 ] 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.298 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:29.298 "name": "Existed_Raid", 00:25:29.298 "uuid": "6aea1781-3e77-449b-b20a-1c9d4772d402", 00:25:29.298 "strip_size_kb": 64, 00:25:29.298 "state": "online", 00:25:29.298 "raid_level": "raid5f", 00:25:29.298 "superblock": true, 00:25:29.298 "num_base_bdevs": 3, 00:25:29.298 "num_base_bdevs_discovered": 3, 00:25:29.298 "num_base_bdevs_operational": 3, 00:25:29.298 "base_bdevs_list": [ 00:25:29.298 { 00:25:29.298 "name": "BaseBdev1", 00:25:29.298 "uuid": "5ebf61e9-2372-4a18-97af-4cfa7d4387ab", 00:25:29.298 "is_configured": true, 00:25:29.298 "data_offset": 2048, 00:25:29.298 "data_size": 63488 00:25:29.298 }, 00:25:29.298 { 00:25:29.298 "name": "BaseBdev2", 00:25:29.298 "uuid": "cb6383bf-88b4-4f20-991d-cc391397efbc", 00:25:29.298 "is_configured": true, 00:25:29.298 "data_offset": 2048, 00:25:29.298 "data_size": 63488 00:25:29.298 }, 00:25:29.299 { 00:25:29.299 "name": "BaseBdev3", 00:25:29.299 "uuid": "7d31c23f-b806-443a-a867-d064576a195c", 00:25:29.299 "is_configured": true, 00:25:29.299 "data_offset": 2048, 00:25:29.299 "data_size": 63488 00:25:29.299 } 00:25:29.299 ] 00:25:29.299 }' 00:25:29.299 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:29.299 23:07:04 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:29.557 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:25:29.557 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:29.557 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:29.557 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:29.557 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:25:29.557 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:29.557 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:29.557 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:29.557 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.557 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:29.557 [2024-12-09 23:07:04.898313] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:29.557 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.818 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:29.818 "name": "Existed_Raid", 00:25:29.818 "aliases": [ 00:25:29.818 "6aea1781-3e77-449b-b20a-1c9d4772d402" 00:25:29.818 ], 00:25:29.818 "product_name": "Raid Volume", 00:25:29.818 "block_size": 512, 00:25:29.818 "num_blocks": 126976, 00:25:29.818 "uuid": "6aea1781-3e77-449b-b20a-1c9d4772d402", 00:25:29.818 "assigned_rate_limits": { 00:25:29.818 "rw_ios_per_sec": 0, 00:25:29.818 
"rw_mbytes_per_sec": 0, 00:25:29.818 "r_mbytes_per_sec": 0, 00:25:29.818 "w_mbytes_per_sec": 0 00:25:29.818 }, 00:25:29.818 "claimed": false, 00:25:29.818 "zoned": false, 00:25:29.818 "supported_io_types": { 00:25:29.818 "read": true, 00:25:29.818 "write": true, 00:25:29.818 "unmap": false, 00:25:29.818 "flush": false, 00:25:29.818 "reset": true, 00:25:29.818 "nvme_admin": false, 00:25:29.818 "nvme_io": false, 00:25:29.818 "nvme_io_md": false, 00:25:29.818 "write_zeroes": true, 00:25:29.818 "zcopy": false, 00:25:29.818 "get_zone_info": false, 00:25:29.818 "zone_management": false, 00:25:29.818 "zone_append": false, 00:25:29.818 "compare": false, 00:25:29.818 "compare_and_write": false, 00:25:29.818 "abort": false, 00:25:29.818 "seek_hole": false, 00:25:29.818 "seek_data": false, 00:25:29.818 "copy": false, 00:25:29.818 "nvme_iov_md": false 00:25:29.818 }, 00:25:29.818 "driver_specific": { 00:25:29.818 "raid": { 00:25:29.818 "uuid": "6aea1781-3e77-449b-b20a-1c9d4772d402", 00:25:29.818 "strip_size_kb": 64, 00:25:29.818 "state": "online", 00:25:29.818 "raid_level": "raid5f", 00:25:29.818 "superblock": true, 00:25:29.818 "num_base_bdevs": 3, 00:25:29.818 "num_base_bdevs_discovered": 3, 00:25:29.818 "num_base_bdevs_operational": 3, 00:25:29.818 "base_bdevs_list": [ 00:25:29.818 { 00:25:29.818 "name": "BaseBdev1", 00:25:29.818 "uuid": "5ebf61e9-2372-4a18-97af-4cfa7d4387ab", 00:25:29.818 "is_configured": true, 00:25:29.818 "data_offset": 2048, 00:25:29.818 "data_size": 63488 00:25:29.818 }, 00:25:29.818 { 00:25:29.818 "name": "BaseBdev2", 00:25:29.818 "uuid": "cb6383bf-88b4-4f20-991d-cc391397efbc", 00:25:29.818 "is_configured": true, 00:25:29.818 "data_offset": 2048, 00:25:29.818 "data_size": 63488 00:25:29.818 }, 00:25:29.818 { 00:25:29.818 "name": "BaseBdev3", 00:25:29.818 "uuid": "7d31c23f-b806-443a-a867-d064576a195c", 00:25:29.818 "is_configured": true, 00:25:29.818 "data_offset": 2048, 00:25:29.818 "data_size": 63488 00:25:29.818 } 00:25:29.818 ] 00:25:29.818 } 
00:25:29.818 } 00:25:29.818 }' 00:25:29.819 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:29.819 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:25:29.819 BaseBdev2 00:25:29.819 BaseBdev3' 00:25:29.819 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:29.819 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:29.819 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:29.819 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:29.819 23:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:25:29.819 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.819 23:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:29.819 [2024-12-09 
23:07:05.086202] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:29.819 23:07:05 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:29.819 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.078 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:30.078 "name": "Existed_Raid", 00:25:30.078 "uuid": "6aea1781-3e77-449b-b20a-1c9d4772d402", 00:25:30.078 "strip_size_kb": 64, 00:25:30.078 "state": "online", 00:25:30.078 "raid_level": "raid5f", 00:25:30.078 "superblock": true, 00:25:30.078 "num_base_bdevs": 3, 00:25:30.078 "num_base_bdevs_discovered": 2, 00:25:30.078 "num_base_bdevs_operational": 2, 00:25:30.078 "base_bdevs_list": [ 00:25:30.078 { 00:25:30.078 "name": null, 00:25:30.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:30.078 "is_configured": false, 00:25:30.078 "data_offset": 0, 00:25:30.078 "data_size": 63488 00:25:30.078 }, 00:25:30.078 { 00:25:30.078 "name": "BaseBdev2", 00:25:30.078 "uuid": "cb6383bf-88b4-4f20-991d-cc391397efbc", 00:25:30.078 "is_configured": true, 00:25:30.078 "data_offset": 2048, 00:25:30.078 "data_size": 63488 00:25:30.078 }, 00:25:30.078 { 00:25:30.078 "name": "BaseBdev3", 00:25:30.078 "uuid": "7d31c23f-b806-443a-a867-d064576a195c", 00:25:30.078 "is_configured": true, 00:25:30.078 "data_offset": 2048, 00:25:30.078 "data_size": 63488 00:25:30.078 } 00:25:30.078 ] 00:25:30.078 }' 00:25:30.078 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:30.078 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:25:30.335 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:25:30.335 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:30.335 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:30.335 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:30.335 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.335 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:30.335 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.336 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:30.336 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:30.336 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:25:30.336 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.336 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:30.336 [2024-12-09 23:07:05.507027] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:30.336 [2024-12-09 23:07:05.507185] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:30.336 [2024-12-09 23:07:05.566886] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:30.336 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.336 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:30.336 23:07:05 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:30.336 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:30.336 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:30.336 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.336 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:30.336 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.336 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:30.336 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:30.336 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:25:30.336 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.336 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:30.336 [2024-12-09 23:07:05.602942] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:30.336 [2024-12-09 23:07:05.602989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:25:30.336 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.336 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:30.336 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:30.336 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:30.336 
23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.336 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:30.336 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:25:30.336 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.594 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:25:30.594 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:25:30.594 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:25:30.594 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:25:30.594 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:30.594 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:30.594 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.594 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:30.594 BaseBdev2 00:25:30.594 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.594 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:25:30.594 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:25:30.594 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:30.594 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:30.594 23:07:05 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:30.594 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:30.594 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:30.594 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.594 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:30.594 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.594 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:30.594 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.594 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:30.595 [ 00:25:30.595 { 00:25:30.595 "name": "BaseBdev2", 00:25:30.595 "aliases": [ 00:25:30.595 "ccd7dbdc-9faf-4953-8db5-a8edfec68cc5" 00:25:30.595 ], 00:25:30.595 "product_name": "Malloc disk", 00:25:30.595 "block_size": 512, 00:25:30.595 "num_blocks": 65536, 00:25:30.595 "uuid": "ccd7dbdc-9faf-4953-8db5-a8edfec68cc5", 00:25:30.595 "assigned_rate_limits": { 00:25:30.595 "rw_ios_per_sec": 0, 00:25:30.595 "rw_mbytes_per_sec": 0, 00:25:30.595 "r_mbytes_per_sec": 0, 00:25:30.595 "w_mbytes_per_sec": 0 00:25:30.595 }, 00:25:30.595 "claimed": false, 00:25:30.595 "zoned": false, 00:25:30.595 "supported_io_types": { 00:25:30.595 "read": true, 00:25:30.595 "write": true, 00:25:30.595 "unmap": true, 00:25:30.595 "flush": true, 00:25:30.595 "reset": true, 00:25:30.595 "nvme_admin": false, 00:25:30.595 "nvme_io": false, 00:25:30.595 "nvme_io_md": false, 00:25:30.595 "write_zeroes": true, 00:25:30.595 "zcopy": true, 00:25:30.595 "get_zone_info": false, 
00:25:30.595 "zone_management": false, 00:25:30.595 "zone_append": false, 00:25:30.595 "compare": false, 00:25:30.595 "compare_and_write": false, 00:25:30.595 "abort": true, 00:25:30.595 "seek_hole": false, 00:25:30.595 "seek_data": false, 00:25:30.595 "copy": true, 00:25:30.595 "nvme_iov_md": false 00:25:30.595 }, 00:25:30.595 "memory_domains": [ 00:25:30.595 { 00:25:30.595 "dma_device_id": "system", 00:25:30.595 "dma_device_type": 1 00:25:30.595 }, 00:25:30.595 { 00:25:30.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:30.595 "dma_device_type": 2 00:25:30.595 } 00:25:30.595 ], 00:25:30.595 "driver_specific": {} 00:25:30.595 } 00:25:30.595 ] 00:25:30.595 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.595 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:30.595 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:30.595 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:30.595 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:30.595 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.595 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:30.595 BaseBdev3 00:25:30.595 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.595 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:25:30.595 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:25:30.595 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:30.595 23:07:05 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:30.595 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:30.595 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:30.595 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:30.595 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.595 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:30.595 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.595 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:30.595 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.595 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:30.595 [ 00:25:30.595 { 00:25:30.595 "name": "BaseBdev3", 00:25:30.595 "aliases": [ 00:25:30.595 "3b5a9206-4de4-4605-8ac8-db8a8231d236" 00:25:30.595 ], 00:25:30.595 "product_name": "Malloc disk", 00:25:30.595 "block_size": 512, 00:25:30.595 "num_blocks": 65536, 00:25:30.595 "uuid": "3b5a9206-4de4-4605-8ac8-db8a8231d236", 00:25:30.595 "assigned_rate_limits": { 00:25:30.595 "rw_ios_per_sec": 0, 00:25:30.595 "rw_mbytes_per_sec": 0, 00:25:30.595 "r_mbytes_per_sec": 0, 00:25:30.595 "w_mbytes_per_sec": 0 00:25:30.595 }, 00:25:30.595 "claimed": false, 00:25:30.595 "zoned": false, 00:25:30.595 "supported_io_types": { 00:25:30.595 "read": true, 00:25:30.595 "write": true, 00:25:30.595 "unmap": true, 00:25:30.595 "flush": true, 00:25:30.595 "reset": true, 00:25:30.595 "nvme_admin": false, 00:25:30.595 "nvme_io": false, 00:25:30.595 "nvme_io_md": 
false, 00:25:30.595 "write_zeroes": true, 00:25:30.595 "zcopy": true, 00:25:30.595 "get_zone_info": false, 00:25:30.595 "zone_management": false, 00:25:30.595 "zone_append": false, 00:25:30.595 "compare": false, 00:25:30.595 "compare_and_write": false, 00:25:30.595 "abort": true, 00:25:30.595 "seek_hole": false, 00:25:30.595 "seek_data": false, 00:25:30.595 "copy": true, 00:25:30.595 "nvme_iov_md": false 00:25:30.595 }, 00:25:30.595 "memory_domains": [ 00:25:30.595 { 00:25:30.595 "dma_device_id": "system", 00:25:30.595 "dma_device_type": 1 00:25:30.595 }, 00:25:30.595 { 00:25:30.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:30.595 "dma_device_type": 2 00:25:30.595 } 00:25:30.595 ], 00:25:30.595 "driver_specific": {} 00:25:30.595 } 00:25:30.595 ] 00:25:30.595 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.595 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:30.595 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:30.595 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:30.595 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:30.595 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.595 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:30.595 [2024-12-09 23:07:05.813044] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:30.595 [2024-12-09 23:07:05.813235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:30.595 [2024-12-09 23:07:05.813311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:25:30.595 [2024-12-09 23:07:05.815241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:30.595 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.595 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:30.596 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:30.596 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:30.596 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:30.596 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:30.596 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:30.596 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:30.596 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:30.596 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:30.596 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:30.596 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:30.596 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:30.596 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.596 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:30.596 23:07:05 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.596 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:30.596 "name": "Existed_Raid", 00:25:30.596 "uuid": "18ca60a1-561b-4518-af1e-2f488069964c", 00:25:30.596 "strip_size_kb": 64, 00:25:30.596 "state": "configuring", 00:25:30.596 "raid_level": "raid5f", 00:25:30.596 "superblock": true, 00:25:30.596 "num_base_bdevs": 3, 00:25:30.596 "num_base_bdevs_discovered": 2, 00:25:30.596 "num_base_bdevs_operational": 3, 00:25:30.596 "base_bdevs_list": [ 00:25:30.596 { 00:25:30.596 "name": "BaseBdev1", 00:25:30.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:30.596 "is_configured": false, 00:25:30.596 "data_offset": 0, 00:25:30.596 "data_size": 0 00:25:30.596 }, 00:25:30.596 { 00:25:30.596 "name": "BaseBdev2", 00:25:30.596 "uuid": "ccd7dbdc-9faf-4953-8db5-a8edfec68cc5", 00:25:30.596 "is_configured": true, 00:25:30.596 "data_offset": 2048, 00:25:30.596 "data_size": 63488 00:25:30.596 }, 00:25:30.596 { 00:25:30.596 "name": "BaseBdev3", 00:25:30.596 "uuid": "3b5a9206-4de4-4605-8ac8-db8a8231d236", 00:25:30.596 "is_configured": true, 00:25:30.596 "data_offset": 2048, 00:25:30.596 "data_size": 63488 00:25:30.596 } 00:25:30.596 ] 00:25:30.596 }' 00:25:30.596 23:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:30.596 23:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:30.859 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:25:30.859 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.859 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:30.859 [2024-12-09 23:07:06.133090] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:30.859 
23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.859 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:30.859 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:30.859 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:30.859 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:30.859 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:30.859 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:30.859 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:30.859 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:30.859 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:30.859 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:30.859 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:30.859 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:30.859 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.859 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:30.859 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.859 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:25:30.859 "name": "Existed_Raid", 00:25:30.859 "uuid": "18ca60a1-561b-4518-af1e-2f488069964c", 00:25:30.859 "strip_size_kb": 64, 00:25:30.859 "state": "configuring", 00:25:30.859 "raid_level": "raid5f", 00:25:30.859 "superblock": true, 00:25:30.859 "num_base_bdevs": 3, 00:25:30.859 "num_base_bdevs_discovered": 1, 00:25:30.859 "num_base_bdevs_operational": 3, 00:25:30.859 "base_bdevs_list": [ 00:25:30.859 { 00:25:30.859 "name": "BaseBdev1", 00:25:30.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:30.859 "is_configured": false, 00:25:30.859 "data_offset": 0, 00:25:30.859 "data_size": 0 00:25:30.859 }, 00:25:30.859 { 00:25:30.859 "name": null, 00:25:30.859 "uuid": "ccd7dbdc-9faf-4953-8db5-a8edfec68cc5", 00:25:30.859 "is_configured": false, 00:25:30.859 "data_offset": 0, 00:25:30.859 "data_size": 63488 00:25:30.859 }, 00:25:30.859 { 00:25:30.859 "name": "BaseBdev3", 00:25:30.859 "uuid": "3b5a9206-4de4-4605-8ac8-db8a8231d236", 00:25:30.859 "is_configured": true, 00:25:30.859 "data_offset": 2048, 00:25:30.859 "data_size": 63488 00:25:30.859 } 00:25:30.859 ] 00:25:30.859 }' 00:25:30.859 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:30.859 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:31.119 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:31.119 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:31.119 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.119 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:31.119 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.119 23:07:06 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:25:31.119 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:31.119 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.119 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:31.377 [2024-12-09 23:07:06.488046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:31.377 BaseBdev1 00:25:31.377 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.377 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:25:31.377 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:25:31.377 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:31.377 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:31.377 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:31.377 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:31.377 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:31.377 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.377 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:31.377 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.377 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:31.377 
23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.377 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:31.377 [ 00:25:31.377 { 00:25:31.377 "name": "BaseBdev1", 00:25:31.377 "aliases": [ 00:25:31.377 "9cba75df-1dfe-4792-ab4e-5382b292f044" 00:25:31.377 ], 00:25:31.377 "product_name": "Malloc disk", 00:25:31.377 "block_size": 512, 00:25:31.377 "num_blocks": 65536, 00:25:31.377 "uuid": "9cba75df-1dfe-4792-ab4e-5382b292f044", 00:25:31.377 "assigned_rate_limits": { 00:25:31.377 "rw_ios_per_sec": 0, 00:25:31.377 "rw_mbytes_per_sec": 0, 00:25:31.377 "r_mbytes_per_sec": 0, 00:25:31.377 "w_mbytes_per_sec": 0 00:25:31.377 }, 00:25:31.377 "claimed": true, 00:25:31.377 "claim_type": "exclusive_write", 00:25:31.377 "zoned": false, 00:25:31.377 "supported_io_types": { 00:25:31.377 "read": true, 00:25:31.377 "write": true, 00:25:31.377 "unmap": true, 00:25:31.377 "flush": true, 00:25:31.377 "reset": true, 00:25:31.377 "nvme_admin": false, 00:25:31.377 "nvme_io": false, 00:25:31.377 "nvme_io_md": false, 00:25:31.377 "write_zeroes": true, 00:25:31.377 "zcopy": true, 00:25:31.377 "get_zone_info": false, 00:25:31.377 "zone_management": false, 00:25:31.377 "zone_append": false, 00:25:31.377 "compare": false, 00:25:31.377 "compare_and_write": false, 00:25:31.377 "abort": true, 00:25:31.377 "seek_hole": false, 00:25:31.377 "seek_data": false, 00:25:31.377 "copy": true, 00:25:31.377 "nvme_iov_md": false 00:25:31.377 }, 00:25:31.377 "memory_domains": [ 00:25:31.377 { 00:25:31.377 "dma_device_id": "system", 00:25:31.377 "dma_device_type": 1 00:25:31.377 }, 00:25:31.377 { 00:25:31.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:31.377 "dma_device_type": 2 00:25:31.377 } 00:25:31.377 ], 00:25:31.377 "driver_specific": {} 00:25:31.377 } 00:25:31.377 ] 00:25:31.377 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.377 
23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:31.377 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:31.377 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:31.377 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:31.377 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:31.377 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:31.377 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:31.377 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:31.377 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:31.377 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:31.377 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:31.377 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:31.377 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:31.377 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.377 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:31.377 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.377 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:25:31.377 "name": "Existed_Raid", 00:25:31.377 "uuid": "18ca60a1-561b-4518-af1e-2f488069964c", 00:25:31.377 "strip_size_kb": 64, 00:25:31.377 "state": "configuring", 00:25:31.377 "raid_level": "raid5f", 00:25:31.377 "superblock": true, 00:25:31.377 "num_base_bdevs": 3, 00:25:31.377 "num_base_bdevs_discovered": 2, 00:25:31.377 "num_base_bdevs_operational": 3, 00:25:31.377 "base_bdevs_list": [ 00:25:31.377 { 00:25:31.377 "name": "BaseBdev1", 00:25:31.377 "uuid": "9cba75df-1dfe-4792-ab4e-5382b292f044", 00:25:31.377 "is_configured": true, 00:25:31.377 "data_offset": 2048, 00:25:31.377 "data_size": 63488 00:25:31.377 }, 00:25:31.377 { 00:25:31.377 "name": null, 00:25:31.377 "uuid": "ccd7dbdc-9faf-4953-8db5-a8edfec68cc5", 00:25:31.377 "is_configured": false, 00:25:31.377 "data_offset": 0, 00:25:31.377 "data_size": 63488 00:25:31.377 }, 00:25:31.377 { 00:25:31.377 "name": "BaseBdev3", 00:25:31.377 "uuid": "3b5a9206-4de4-4605-8ac8-db8a8231d236", 00:25:31.377 "is_configured": true, 00:25:31.377 "data_offset": 2048, 00:25:31.377 "data_size": 63488 00:25:31.377 } 00:25:31.377 ] 00:25:31.377 }' 00:25:31.377 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:31.377 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:31.635 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:31.635 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:31.635 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.635 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:31.635 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.635 23:07:06 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:25:31.635 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:25:31.635 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.635 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:31.635 [2024-12-09 23:07:06.860180] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:31.635 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.635 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:31.635 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:31.635 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:31.635 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:31.635 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:31.635 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:31.635 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:31.635 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:31.635 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:31.635 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:31.635 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:31.635 23:07:06 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:31.635 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.635 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:31.635 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.635 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:31.635 "name": "Existed_Raid", 00:25:31.635 "uuid": "18ca60a1-561b-4518-af1e-2f488069964c", 00:25:31.635 "strip_size_kb": 64, 00:25:31.635 "state": "configuring", 00:25:31.635 "raid_level": "raid5f", 00:25:31.635 "superblock": true, 00:25:31.635 "num_base_bdevs": 3, 00:25:31.635 "num_base_bdevs_discovered": 1, 00:25:31.635 "num_base_bdevs_operational": 3, 00:25:31.635 "base_bdevs_list": [ 00:25:31.635 { 00:25:31.635 "name": "BaseBdev1", 00:25:31.635 "uuid": "9cba75df-1dfe-4792-ab4e-5382b292f044", 00:25:31.635 "is_configured": true, 00:25:31.635 "data_offset": 2048, 00:25:31.635 "data_size": 63488 00:25:31.635 }, 00:25:31.635 { 00:25:31.635 "name": null, 00:25:31.635 "uuid": "ccd7dbdc-9faf-4953-8db5-a8edfec68cc5", 00:25:31.635 "is_configured": false, 00:25:31.635 "data_offset": 0, 00:25:31.635 "data_size": 63488 00:25:31.635 }, 00:25:31.635 { 00:25:31.635 "name": null, 00:25:31.635 "uuid": "3b5a9206-4de4-4605-8ac8-db8a8231d236", 00:25:31.635 "is_configured": false, 00:25:31.635 "data_offset": 0, 00:25:31.635 "data_size": 63488 00:25:31.635 } 00:25:31.635 ] 00:25:31.635 }' 00:25:31.635 23:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:31.635 23:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:31.893 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 
00:25:31.893 23:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.893 23:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:31.893 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:31.893 23:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.893 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:25:31.893 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:31.893 23:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.893 23:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:31.893 [2024-12-09 23:07:07.228267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:31.893 23:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.893 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:31.893 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:31.893 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:31.893 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:31.893 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:31.893 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:31.893 23:07:07 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:31.893 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:31.893 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:31.893 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:31.893 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:31.893 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:31.893 23:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.893 23:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:31.893 23:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.151 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:32.151 "name": "Existed_Raid", 00:25:32.151 "uuid": "18ca60a1-561b-4518-af1e-2f488069964c", 00:25:32.151 "strip_size_kb": 64, 00:25:32.151 "state": "configuring", 00:25:32.151 "raid_level": "raid5f", 00:25:32.151 "superblock": true, 00:25:32.151 "num_base_bdevs": 3, 00:25:32.151 "num_base_bdevs_discovered": 2, 00:25:32.151 "num_base_bdevs_operational": 3, 00:25:32.151 "base_bdevs_list": [ 00:25:32.151 { 00:25:32.151 "name": "BaseBdev1", 00:25:32.151 "uuid": "9cba75df-1dfe-4792-ab4e-5382b292f044", 00:25:32.151 "is_configured": true, 00:25:32.151 "data_offset": 2048, 00:25:32.151 "data_size": 63488 00:25:32.151 }, 00:25:32.151 { 00:25:32.151 "name": null, 00:25:32.151 "uuid": "ccd7dbdc-9faf-4953-8db5-a8edfec68cc5", 00:25:32.151 "is_configured": false, 00:25:32.151 "data_offset": 0, 00:25:32.151 "data_size": 63488 00:25:32.151 }, 00:25:32.151 { 
00:25:32.151 "name": "BaseBdev3", 00:25:32.151 "uuid": "3b5a9206-4de4-4605-8ac8-db8a8231d236", 00:25:32.151 "is_configured": true, 00:25:32.151 "data_offset": 2048, 00:25:32.151 "data_size": 63488 00:25:32.151 } 00:25:32.151 ] 00:25:32.151 }' 00:25:32.151 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:32.151 23:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:32.408 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:32.409 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:32.409 23:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.409 23:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:32.409 23:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.409 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:25:32.409 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:32.409 23:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.409 23:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:32.409 [2024-12-09 23:07:07.580338] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:32.409 23:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.409 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:32.409 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:25:32.409 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:32.409 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:32.409 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:32.409 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:32.409 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:32.409 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:32.409 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:32.409 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:32.409 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:32.409 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:32.409 23:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.409 23:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:32.409 23:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.409 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:32.409 "name": "Existed_Raid", 00:25:32.409 "uuid": "18ca60a1-561b-4518-af1e-2f488069964c", 00:25:32.409 "strip_size_kb": 64, 00:25:32.409 "state": "configuring", 00:25:32.409 "raid_level": "raid5f", 00:25:32.409 "superblock": true, 00:25:32.409 "num_base_bdevs": 3, 00:25:32.409 "num_base_bdevs_discovered": 1, 00:25:32.409 
"num_base_bdevs_operational": 3, 00:25:32.409 "base_bdevs_list": [ 00:25:32.409 { 00:25:32.409 "name": null, 00:25:32.409 "uuid": "9cba75df-1dfe-4792-ab4e-5382b292f044", 00:25:32.409 "is_configured": false, 00:25:32.409 "data_offset": 0, 00:25:32.409 "data_size": 63488 00:25:32.409 }, 00:25:32.409 { 00:25:32.409 "name": null, 00:25:32.409 "uuid": "ccd7dbdc-9faf-4953-8db5-a8edfec68cc5", 00:25:32.409 "is_configured": false, 00:25:32.409 "data_offset": 0, 00:25:32.409 "data_size": 63488 00:25:32.409 }, 00:25:32.409 { 00:25:32.409 "name": "BaseBdev3", 00:25:32.409 "uuid": "3b5a9206-4de4-4605-8ac8-db8a8231d236", 00:25:32.409 "is_configured": true, 00:25:32.409 "data_offset": 2048, 00:25:32.409 "data_size": 63488 00:25:32.409 } 00:25:32.409 ] 00:25:32.409 }' 00:25:32.409 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:32.409 23:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:32.669 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:32.669 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:32.669 23:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.669 23:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:32.669 23:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.669 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:25:32.669 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:32.669 23:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.669 23:07:07 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:32.669 [2024-12-09 23:07:07.964405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:32.669 23:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.669 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:32.669 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:32.669 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:32.669 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:32.669 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:32.669 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:32.669 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:32.669 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:32.669 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:32.669 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:32.669 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:32.669 23:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:32.669 23:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.669 23:07:07 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:25:32.669 23:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.669 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:32.669 "name": "Existed_Raid", 00:25:32.669 "uuid": "18ca60a1-561b-4518-af1e-2f488069964c", 00:25:32.669 "strip_size_kb": 64, 00:25:32.669 "state": "configuring", 00:25:32.669 "raid_level": "raid5f", 00:25:32.669 "superblock": true, 00:25:32.669 "num_base_bdevs": 3, 00:25:32.669 "num_base_bdevs_discovered": 2, 00:25:32.669 "num_base_bdevs_operational": 3, 00:25:32.669 "base_bdevs_list": [ 00:25:32.669 { 00:25:32.669 "name": null, 00:25:32.669 "uuid": "9cba75df-1dfe-4792-ab4e-5382b292f044", 00:25:32.669 "is_configured": false, 00:25:32.669 "data_offset": 0, 00:25:32.669 "data_size": 63488 00:25:32.669 }, 00:25:32.669 { 00:25:32.669 "name": "BaseBdev2", 00:25:32.669 "uuid": "ccd7dbdc-9faf-4953-8db5-a8edfec68cc5", 00:25:32.669 "is_configured": true, 00:25:32.669 "data_offset": 2048, 00:25:32.669 "data_size": 63488 00:25:32.669 }, 00:25:32.669 { 00:25:32.669 "name": "BaseBdev3", 00:25:32.669 "uuid": "3b5a9206-4de4-4605-8ac8-db8a8231d236", 00:25:32.669 "is_configured": true, 00:25:32.669 "data_offset": 2048, 00:25:32.669 "data_size": 63488 00:25:32.669 } 00:25:32.669 ] 00:25:32.669 }' 00:25:32.669 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:32.669 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.235 23:07:08 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9cba75df-1dfe-4792-ab4e-5382b292f044 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:33.235 [2024-12-09 23:07:08.379558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:33.235 NewBaseBdev 00:25:33.235 [2024-12-09 23:07:08.379913] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:33.235 [2024-12-09 23:07:08.379932] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:33.235 [2024-12-09 23:07:08.380161] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.235 23:07:08 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:33.235 [2024-12-09 23:07:08.383138] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:33.235 [2024-12-09 23:07:08.383154] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:25:33.235 [2024-12-09 23:07:08.383275] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:33.235 [ 00:25:33.235 { 00:25:33.235 "name": "NewBaseBdev", 00:25:33.235 
"aliases": [ 00:25:33.235 "9cba75df-1dfe-4792-ab4e-5382b292f044" 00:25:33.235 ], 00:25:33.235 "product_name": "Malloc disk", 00:25:33.235 "block_size": 512, 00:25:33.235 "num_blocks": 65536, 00:25:33.235 "uuid": "9cba75df-1dfe-4792-ab4e-5382b292f044", 00:25:33.235 "assigned_rate_limits": { 00:25:33.235 "rw_ios_per_sec": 0, 00:25:33.235 "rw_mbytes_per_sec": 0, 00:25:33.235 "r_mbytes_per_sec": 0, 00:25:33.235 "w_mbytes_per_sec": 0 00:25:33.235 }, 00:25:33.235 "claimed": true, 00:25:33.235 "claim_type": "exclusive_write", 00:25:33.235 "zoned": false, 00:25:33.235 "supported_io_types": { 00:25:33.235 "read": true, 00:25:33.235 "write": true, 00:25:33.235 "unmap": true, 00:25:33.235 "flush": true, 00:25:33.235 "reset": true, 00:25:33.235 "nvme_admin": false, 00:25:33.235 "nvme_io": false, 00:25:33.235 "nvme_io_md": false, 00:25:33.235 "write_zeroes": true, 00:25:33.235 "zcopy": true, 00:25:33.235 "get_zone_info": false, 00:25:33.235 "zone_management": false, 00:25:33.235 "zone_append": false, 00:25:33.235 "compare": false, 00:25:33.235 "compare_and_write": false, 00:25:33.235 "abort": true, 00:25:33.235 "seek_hole": false, 00:25:33.235 "seek_data": false, 00:25:33.235 "copy": true, 00:25:33.235 "nvme_iov_md": false 00:25:33.235 }, 00:25:33.235 "memory_domains": [ 00:25:33.235 { 00:25:33.235 "dma_device_id": "system", 00:25:33.235 "dma_device_type": 1 00:25:33.235 }, 00:25:33.235 { 00:25:33.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:33.235 "dma_device_type": 2 00:25:33.235 } 00:25:33.235 ], 00:25:33.235 "driver_specific": {} 00:25:33.235 } 00:25:33.235 ] 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:25:33.235 23:07:08 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:33.235 "name": "Existed_Raid", 00:25:33.235 "uuid": "18ca60a1-561b-4518-af1e-2f488069964c", 00:25:33.235 "strip_size_kb": 64, 00:25:33.235 "state": "online", 00:25:33.235 "raid_level": "raid5f", 00:25:33.235 "superblock": true, 00:25:33.235 
"num_base_bdevs": 3, 00:25:33.235 "num_base_bdevs_discovered": 3, 00:25:33.235 "num_base_bdevs_operational": 3, 00:25:33.235 "base_bdevs_list": [ 00:25:33.235 { 00:25:33.235 "name": "NewBaseBdev", 00:25:33.235 "uuid": "9cba75df-1dfe-4792-ab4e-5382b292f044", 00:25:33.235 "is_configured": true, 00:25:33.235 "data_offset": 2048, 00:25:33.235 "data_size": 63488 00:25:33.235 }, 00:25:33.235 { 00:25:33.235 "name": "BaseBdev2", 00:25:33.235 "uuid": "ccd7dbdc-9faf-4953-8db5-a8edfec68cc5", 00:25:33.235 "is_configured": true, 00:25:33.235 "data_offset": 2048, 00:25:33.235 "data_size": 63488 00:25:33.235 }, 00:25:33.235 { 00:25:33.235 "name": "BaseBdev3", 00:25:33.235 "uuid": "3b5a9206-4de4-4605-8ac8-db8a8231d236", 00:25:33.235 "is_configured": true, 00:25:33.235 "data_offset": 2048, 00:25:33.235 "data_size": 63488 00:25:33.235 } 00:25:33.235 ] 00:25:33.235 }' 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:33.235 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:33.530 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:25:33.531 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:33.531 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:33.531 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:33.531 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:25:33.531 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:33.531 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:33.531 23:07:08 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:33.531 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.531 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:33.531 [2024-12-09 23:07:08.706873] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:33.531 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.531 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:33.531 "name": "Existed_Raid", 00:25:33.531 "aliases": [ 00:25:33.531 "18ca60a1-561b-4518-af1e-2f488069964c" 00:25:33.531 ], 00:25:33.531 "product_name": "Raid Volume", 00:25:33.531 "block_size": 512, 00:25:33.531 "num_blocks": 126976, 00:25:33.531 "uuid": "18ca60a1-561b-4518-af1e-2f488069964c", 00:25:33.531 "assigned_rate_limits": { 00:25:33.531 "rw_ios_per_sec": 0, 00:25:33.531 "rw_mbytes_per_sec": 0, 00:25:33.531 "r_mbytes_per_sec": 0, 00:25:33.531 "w_mbytes_per_sec": 0 00:25:33.531 }, 00:25:33.531 "claimed": false, 00:25:33.531 "zoned": false, 00:25:33.531 "supported_io_types": { 00:25:33.531 "read": true, 00:25:33.531 "write": true, 00:25:33.531 "unmap": false, 00:25:33.531 "flush": false, 00:25:33.531 "reset": true, 00:25:33.531 "nvme_admin": false, 00:25:33.531 "nvme_io": false, 00:25:33.531 "nvme_io_md": false, 00:25:33.531 "write_zeroes": true, 00:25:33.531 "zcopy": false, 00:25:33.531 "get_zone_info": false, 00:25:33.531 "zone_management": false, 00:25:33.531 "zone_append": false, 00:25:33.531 "compare": false, 00:25:33.531 "compare_and_write": false, 00:25:33.531 "abort": false, 00:25:33.531 "seek_hole": false, 00:25:33.531 "seek_data": false, 00:25:33.531 "copy": false, 00:25:33.531 "nvme_iov_md": false 00:25:33.531 }, 00:25:33.531 "driver_specific": { 00:25:33.531 "raid": { 00:25:33.531 "uuid": "18ca60a1-561b-4518-af1e-2f488069964c", 00:25:33.531 
"strip_size_kb": 64, 00:25:33.531 "state": "online", 00:25:33.531 "raid_level": "raid5f", 00:25:33.531 "superblock": true, 00:25:33.531 "num_base_bdevs": 3, 00:25:33.531 "num_base_bdevs_discovered": 3, 00:25:33.531 "num_base_bdevs_operational": 3, 00:25:33.531 "base_bdevs_list": [ 00:25:33.531 { 00:25:33.531 "name": "NewBaseBdev", 00:25:33.531 "uuid": "9cba75df-1dfe-4792-ab4e-5382b292f044", 00:25:33.531 "is_configured": true, 00:25:33.531 "data_offset": 2048, 00:25:33.531 "data_size": 63488 00:25:33.531 }, 00:25:33.531 { 00:25:33.531 "name": "BaseBdev2", 00:25:33.531 "uuid": "ccd7dbdc-9faf-4953-8db5-a8edfec68cc5", 00:25:33.531 "is_configured": true, 00:25:33.531 "data_offset": 2048, 00:25:33.531 "data_size": 63488 00:25:33.531 }, 00:25:33.531 { 00:25:33.531 "name": "BaseBdev3", 00:25:33.531 "uuid": "3b5a9206-4de4-4605-8ac8-db8a8231d236", 00:25:33.531 "is_configured": true, 00:25:33.531 "data_offset": 2048, 00:25:33.531 "data_size": 63488 00:25:33.531 } 00:25:33.531 ] 00:25:33.531 } 00:25:33.531 } 00:25:33.531 }' 00:25:33.531 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:33.531 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:25:33.531 BaseBdev2 00:25:33.531 BaseBdev3' 00:25:33.531 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:33.531 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:33.531 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:33.531 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:25:33.531 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:25:33.531 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:33.531 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:33.531 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.531 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:33.531 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:33.531 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:33.531 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:33.531 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.531 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:33.531 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:33.531 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.531 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:33.531 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:33.531 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:33.789 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:33.789 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:33.789 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.789 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:33.789 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.789 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:33.789 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:33.789 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:33.789 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.789 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:33.789 [2024-12-09 23:07:08.926739] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:33.789 [2024-12-09 23:07:08.926763] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:33.789 [2024-12-09 23:07:08.926826] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:33.789 [2024-12-09 23:07:08.927055] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:33.789 [2024-12-09 23:07:08.927065] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:25:33.789 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.789 23:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 78310 00:25:33.789 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78310 ']' 00:25:33.789 23:07:08 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 78310 00:25:33.789 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:25:33.789 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:33.789 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78310 00:25:33.789 killing process with pid 78310 00:25:33.789 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:33.789 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:33.789 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78310' 00:25:33.789 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 78310 00:25:33.789 [2024-12-09 23:07:08.957021] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:33.789 23:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 78310 00:25:33.789 [2024-12-09 23:07:09.110053] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:34.355 23:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:25:34.356 00:25:34.356 real 0m7.558s 00:25:34.356 user 0m12.086s 00:25:34.356 sys 0m1.304s 00:25:34.356 23:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:34.356 ************************************ 00:25:34.356 END TEST raid5f_state_function_test_sb 00:25:34.356 23:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:34.356 ************************************ 00:25:34.612 23:07:09 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test 
raid5f 3 00:25:34.612 23:07:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:34.612 23:07:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:34.612 23:07:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:34.612 ************************************ 00:25:34.612 START TEST raid5f_superblock_test 00:25:34.612 ************************************ 00:25:34.612 23:07:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:25:34.612 23:07:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:25:34.612 23:07:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:25:34.613 23:07:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:25:34.613 23:07:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:25:34.613 23:07:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:25:34.613 23:07:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:25:34.613 23:07:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:25:34.613 23:07:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:25:34.613 23:07:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:25:34.613 23:07:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:25:34.613 23:07:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:25:34.613 23:07:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:25:34.613 23:07:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:25:34.613 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:25:34.613 23:07:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:25:34.613 23:07:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:25:34.613 23:07:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:25:34.613 23:07:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=78903 00:25:34.613 23:07:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 78903 00:25:34.613 23:07:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 78903 ']' 00:25:34.613 23:07:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:34.613 23:07:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:34.613 23:07:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:25:34.613 23:07:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:34.613 23:07:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:34.613 23:07:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:34.613 [2024-12-09 23:07:09.807668] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:25:34.613 [2024-12-09 23:07:09.807954] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78903 ] 00:25:34.613 [2024-12-09 23:07:09.957241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.870 [2024-12-09 23:07:10.051785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.870 [2024-12-09 23:07:10.165284] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:34.870 [2024-12-09 23:07:10.165460] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:35.435 23:07:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:35.436 23:07:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:25:35.436 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:25:35.436 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:35.436 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:25:35.436 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:25:35.436 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:35.436 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:35.436 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:35.436 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:35.436 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:25:35.436 23:07:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.436 23:07:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.436 malloc1 00:25:35.436 23:07:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.436 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:35.436 23:07:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.436 23:07:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.436 [2024-12-09 23:07:10.783686] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:35.436 [2024-12-09 23:07:10.783753] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:35.436 [2024-12-09 23:07:10.783773] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:35.436 [2024-12-09 23:07:10.783781] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:35.436 [2024-12-09 23:07:10.785718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:35.436 [2024-12-09 23:07:10.785870] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:35.436 pt1 00:25:35.436 23:07:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.436 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:35.436 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:35.436 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:25:35.436 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:25:35.436 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:35.436 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:35.436 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:35.436 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:35.436 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:25:35.436 23:07:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.436 23:07:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.700 malloc2 00:25:35.700 23:07:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.700 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:35.700 23:07:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.700 23:07:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.700 [2024-12-09 23:07:10.824157] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:35.700 [2024-12-09 23:07:10.824381] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:35.700 [2024-12-09 23:07:10.824410] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:35.700 [2024-12-09 23:07:10.824417] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:35.700 [2024-12-09 23:07:10.826327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:35.700 [2024-12-09 23:07:10.826363] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:35.700 pt2 00:25:35.700 23:07:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.700 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:35.700 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:35.700 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:25:35.700 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:25:35.700 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:25:35.700 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:35.700 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:35.700 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:35.700 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:25:35.700 23:07:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.700 23:07:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.700 malloc3 00:25:35.700 23:07:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.700 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:35.700 23:07:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.700 23:07:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.700 [2024-12-09 23:07:10.874406] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:35.700 [2024-12-09 23:07:10.874475] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:35.700 [2024-12-09 23:07:10.874496] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:35.700 [2024-12-09 23:07:10.874504] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:35.700 [2024-12-09 23:07:10.876374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:35.700 [2024-12-09 23:07:10.876412] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:35.700 pt3 00:25:35.700 23:07:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.700 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:35.700 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:35.700 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:25:35.700 23:07:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.700 23:07:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.700 [2024-12-09 23:07:10.882445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:35.700 [2024-12-09 23:07:10.884047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:35.700 [2024-12-09 23:07:10.884243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:35.700 [2024-12-09 23:07:10.884418] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:35.700 [2024-12-09 23:07:10.884455] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:25:35.700 [2024-12-09 23:07:10.884685] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:35.700 [2024-12-09 23:07:10.887788] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:35.700 [2024-12-09 23:07:10.887891] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:25:35.700 [2024-12-09 23:07:10.888119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:35.700 23:07:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.700 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:35.700 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:35.700 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:35.700 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:35.700 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:35.700 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:35.700 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:35.701 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:35.701 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:35.701 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:35.701 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:35.701 23:07:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.701 
23:07:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.701 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:35.701 23:07:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.701 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:35.701 "name": "raid_bdev1", 00:25:35.701 "uuid": "a23d58ae-e651-47ec-bedb-fb82c47eb3f0", 00:25:35.701 "strip_size_kb": 64, 00:25:35.701 "state": "online", 00:25:35.701 "raid_level": "raid5f", 00:25:35.701 "superblock": true, 00:25:35.701 "num_base_bdevs": 3, 00:25:35.701 "num_base_bdevs_discovered": 3, 00:25:35.701 "num_base_bdevs_operational": 3, 00:25:35.701 "base_bdevs_list": [ 00:25:35.701 { 00:25:35.701 "name": "pt1", 00:25:35.701 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:35.701 "is_configured": true, 00:25:35.701 "data_offset": 2048, 00:25:35.701 "data_size": 63488 00:25:35.701 }, 00:25:35.701 { 00:25:35.701 "name": "pt2", 00:25:35.701 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:35.701 "is_configured": true, 00:25:35.701 "data_offset": 2048, 00:25:35.701 "data_size": 63488 00:25:35.701 }, 00:25:35.701 { 00:25:35.701 "name": "pt3", 00:25:35.701 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:35.701 "is_configured": true, 00:25:35.701 "data_offset": 2048, 00:25:35.701 "data_size": 63488 00:25:35.701 } 00:25:35.701 ] 00:25:35.701 }' 00:25:35.701 23:07:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:35.701 23:07:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.964 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:25:35.964 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:35.964 23:07:11 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:35.964 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:35.964 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:35.964 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:35.964 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:35.964 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.964 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.964 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:35.964 [2024-12-09 23:07:11.216325] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:35.964 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.964 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:35.964 "name": "raid_bdev1", 00:25:35.964 "aliases": [ 00:25:35.964 "a23d58ae-e651-47ec-bedb-fb82c47eb3f0" 00:25:35.964 ], 00:25:35.964 "product_name": "Raid Volume", 00:25:35.964 "block_size": 512, 00:25:35.964 "num_blocks": 126976, 00:25:35.964 "uuid": "a23d58ae-e651-47ec-bedb-fb82c47eb3f0", 00:25:35.964 "assigned_rate_limits": { 00:25:35.964 "rw_ios_per_sec": 0, 00:25:35.964 "rw_mbytes_per_sec": 0, 00:25:35.964 "r_mbytes_per_sec": 0, 00:25:35.964 "w_mbytes_per_sec": 0 00:25:35.964 }, 00:25:35.964 "claimed": false, 00:25:35.964 "zoned": false, 00:25:35.964 "supported_io_types": { 00:25:35.964 "read": true, 00:25:35.964 "write": true, 00:25:35.964 "unmap": false, 00:25:35.964 "flush": false, 00:25:35.964 "reset": true, 00:25:35.964 "nvme_admin": false, 00:25:35.964 "nvme_io": false, 00:25:35.964 "nvme_io_md": false, 
00:25:35.964 "write_zeroes": true, 00:25:35.964 "zcopy": false, 00:25:35.964 "get_zone_info": false, 00:25:35.964 "zone_management": false, 00:25:35.964 "zone_append": false, 00:25:35.964 "compare": false, 00:25:35.964 "compare_and_write": false, 00:25:35.964 "abort": false, 00:25:35.964 "seek_hole": false, 00:25:35.964 "seek_data": false, 00:25:35.964 "copy": false, 00:25:35.964 "nvme_iov_md": false 00:25:35.964 }, 00:25:35.964 "driver_specific": { 00:25:35.964 "raid": { 00:25:35.964 "uuid": "a23d58ae-e651-47ec-bedb-fb82c47eb3f0", 00:25:35.964 "strip_size_kb": 64, 00:25:35.964 "state": "online", 00:25:35.964 "raid_level": "raid5f", 00:25:35.964 "superblock": true, 00:25:35.964 "num_base_bdevs": 3, 00:25:35.964 "num_base_bdevs_discovered": 3, 00:25:35.964 "num_base_bdevs_operational": 3, 00:25:35.964 "base_bdevs_list": [ 00:25:35.964 { 00:25:35.964 "name": "pt1", 00:25:35.964 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:35.964 "is_configured": true, 00:25:35.964 "data_offset": 2048, 00:25:35.964 "data_size": 63488 00:25:35.964 }, 00:25:35.964 { 00:25:35.964 "name": "pt2", 00:25:35.964 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:35.964 "is_configured": true, 00:25:35.964 "data_offset": 2048, 00:25:35.964 "data_size": 63488 00:25:35.964 }, 00:25:35.964 { 00:25:35.964 "name": "pt3", 00:25:35.964 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:35.964 "is_configured": true, 00:25:35.964 "data_offset": 2048, 00:25:35.964 "data_size": 63488 00:25:35.964 } 00:25:35.964 ] 00:25:35.964 } 00:25:35.964 } 00:25:35.964 }' 00:25:35.964 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:35.964 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:35.964 pt2 00:25:35.964 pt3' 00:25:35.964 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:25:35.964 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:35.964 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:35.964 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:35.964 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:35.964 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.964 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.228 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.228 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:36.228 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:36.228 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:36.228 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:36.228 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.228 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.228 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:36.228 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.228 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:36.228 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:36.228 
23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.229 [2024-12-09 23:07:11.432325] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a23d58ae-e651-47ec-bedb-fb82c47eb3f0 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a23d58ae-e651-47ec-bedb-fb82c47eb3f0 ']' 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:36.229 23:07:11 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.229 [2024-12-09 23:07:11.464169] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:36.229 [2024-12-09 23:07:11.464198] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:36.229 [2024-12-09 23:07:11.464263] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:36.229 [2024-12-09 23:07:11.464328] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:36.229 [2024-12-09 23:07:11.464336] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.229 [2024-12-09 23:07:11.560213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:36.229 [2024-12-09 23:07:11.561820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:36.229 [2024-12-09 23:07:11.561859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:25:36.229 [2024-12-09 23:07:11.561901] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:25:36.229 [2024-12-09 23:07:11.561943] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:25:36.229 [2024-12-09 23:07:11.561958] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:25:36.229 [2024-12-09 23:07:11.561972] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:36.229 [2024-12-09 23:07:11.561979] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:25:36.229 request: 00:25:36.229 { 00:25:36.229 "name": "raid_bdev1", 00:25:36.229 "raid_level": "raid5f", 00:25:36.229 "base_bdevs": [ 00:25:36.229 "malloc1", 00:25:36.229 "malloc2", 00:25:36.229 "malloc3" 00:25:36.229 ], 00:25:36.229 "strip_size_kb": 64, 00:25:36.229 "superblock": false, 00:25:36.229 "method": "bdev_raid_create", 00:25:36.229 "req_id": 1 00:25:36.229 } 00:25:36.229 Got JSON-RPC error response 00:25:36.229 response: 00:25:36.229 { 00:25:36.229 "code": -17, 00:25:36.229 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:36.229 } 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:25:36.229 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.489 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:25:36.489 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:25:36.489 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:36.489 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.489 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.489 [2024-12-09 23:07:11.604189] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:36.489 [2024-12-09 23:07:11.604240] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:36.489 [2024-12-09 23:07:11.604258] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:25:36.489 [2024-12-09 23:07:11.604265] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:36.489 [2024-12-09 23:07:11.606159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:36.489 [2024-12-09 23:07:11.606191] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:36.489 [2024-12-09 23:07:11.606268] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:36.489 [2024-12-09 23:07:11.606312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:36.489 pt1 00:25:36.489 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.489 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid5f 64 3 00:25:36.489 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:36.489 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:36.489 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:36.489 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:36.489 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:36.489 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:36.489 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:36.489 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:36.489 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:36.489 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:36.489 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:36.489 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.489 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.489 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.489 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:36.489 "name": "raid_bdev1", 00:25:36.489 "uuid": "a23d58ae-e651-47ec-bedb-fb82c47eb3f0", 00:25:36.489 "strip_size_kb": 64, 00:25:36.489 "state": "configuring", 00:25:36.489 "raid_level": "raid5f", 00:25:36.489 "superblock": true, 00:25:36.489 "num_base_bdevs": 3, 00:25:36.489 "num_base_bdevs_discovered": 1, 00:25:36.489 
"num_base_bdevs_operational": 3, 00:25:36.489 "base_bdevs_list": [ 00:25:36.489 { 00:25:36.489 "name": "pt1", 00:25:36.489 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:36.489 "is_configured": true, 00:25:36.489 "data_offset": 2048, 00:25:36.489 "data_size": 63488 00:25:36.489 }, 00:25:36.489 { 00:25:36.489 "name": null, 00:25:36.489 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:36.489 "is_configured": false, 00:25:36.489 "data_offset": 2048, 00:25:36.489 "data_size": 63488 00:25:36.489 }, 00:25:36.489 { 00:25:36.489 "name": null, 00:25:36.489 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:36.489 "is_configured": false, 00:25:36.489 "data_offset": 2048, 00:25:36.489 "data_size": 63488 00:25:36.489 } 00:25:36.489 ] 00:25:36.489 }' 00:25:36.489 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:36.489 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.749 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:25:36.749 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:36.749 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.749 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.749 [2024-12-09 23:07:11.976269] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:36.749 [2024-12-09 23:07:11.976324] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:36.749 [2024-12-09 23:07:11.976341] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:25:36.749 [2024-12-09 23:07:11.976348] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:36.749 [2024-12-09 23:07:11.976708] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:36.749 [2024-12-09 23:07:11.976724] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:36.749 [2024-12-09 23:07:11.976788] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:36.749 [2024-12-09 23:07:11.976808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:36.749 pt2 00:25:36.749 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.749 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:25:36.749 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.749 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.749 [2024-12-09 23:07:11.984278] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:25:36.749 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.749 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:25:36.749 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:36.749 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:36.749 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:36.749 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:36.749 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:36.749 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:36.749 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:25:36.749 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:36.749 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:36.749 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:36.749 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.749 23:07:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.749 23:07:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:36.749 23:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.749 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:36.749 "name": "raid_bdev1", 00:25:36.749 "uuid": "a23d58ae-e651-47ec-bedb-fb82c47eb3f0", 00:25:36.749 "strip_size_kb": 64, 00:25:36.749 "state": "configuring", 00:25:36.749 "raid_level": "raid5f", 00:25:36.749 "superblock": true, 00:25:36.749 "num_base_bdevs": 3, 00:25:36.749 "num_base_bdevs_discovered": 1, 00:25:36.749 "num_base_bdevs_operational": 3, 00:25:36.749 "base_bdevs_list": [ 00:25:36.749 { 00:25:36.749 "name": "pt1", 00:25:36.749 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:36.749 "is_configured": true, 00:25:36.749 "data_offset": 2048, 00:25:36.749 "data_size": 63488 00:25:36.749 }, 00:25:36.749 { 00:25:36.749 "name": null, 00:25:36.749 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:36.749 "is_configured": false, 00:25:36.749 "data_offset": 0, 00:25:36.749 "data_size": 63488 00:25:36.749 }, 00:25:36.749 { 00:25:36.749 "name": null, 00:25:36.749 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:36.749 "is_configured": false, 00:25:36.749 "data_offset": 2048, 00:25:36.749 "data_size": 63488 00:25:36.749 } 00:25:36.749 ] 00:25:36.749 }' 00:25:36.749 23:07:12 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:36.749 23:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.007 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:25:37.007 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:37.007 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:37.007 23:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.007 23:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.007 [2024-12-09 23:07:12.308317] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:37.007 [2024-12-09 23:07:12.308384] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:37.007 [2024-12-09 23:07:12.308400] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:25:37.007 [2024-12-09 23:07:12.308410] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:37.007 [2024-12-09 23:07:12.308804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:37.007 [2024-12-09 23:07:12.308829] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:37.007 [2024-12-09 23:07:12.308894] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:37.007 [2024-12-09 23:07:12.308913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:37.007 pt2 00:25:37.007 23:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.007 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:37.008 23:07:12 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:37.008 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:37.008 23:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.008 23:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.008 [2024-12-09 23:07:12.316332] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:37.008 [2024-12-09 23:07:12.316382] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:37.008 [2024-12-09 23:07:12.316395] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:25:37.008 [2024-12-09 23:07:12.316404] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:37.008 [2024-12-09 23:07:12.316762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:37.008 [2024-12-09 23:07:12.316788] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:37.008 [2024-12-09 23:07:12.316849] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:25:37.008 [2024-12-09 23:07:12.316866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:37.008 [2024-12-09 23:07:12.316972] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:37.008 [2024-12-09 23:07:12.316986] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:37.008 [2024-12-09 23:07:12.317205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:37.008 [2024-12-09 23:07:12.320065] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:37.008 pt3 00:25:37.008 [2024-12-09 23:07:12.320206] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:25:37.008 [2024-12-09 23:07:12.320403] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:37.008 23:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.008 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:37.008 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:37.008 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:37.008 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:37.008 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:37.008 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:37.008 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:37.008 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:37.008 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:37.008 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:37.008 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:37.008 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:37.008 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:37.008 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:37.008 23:07:12 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.008 23:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.008 23:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.008 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:37.008 "name": "raid_bdev1", 00:25:37.008 "uuid": "a23d58ae-e651-47ec-bedb-fb82c47eb3f0", 00:25:37.008 "strip_size_kb": 64, 00:25:37.008 "state": "online", 00:25:37.008 "raid_level": "raid5f", 00:25:37.008 "superblock": true, 00:25:37.008 "num_base_bdevs": 3, 00:25:37.008 "num_base_bdevs_discovered": 3, 00:25:37.008 "num_base_bdevs_operational": 3, 00:25:37.008 "base_bdevs_list": [ 00:25:37.008 { 00:25:37.008 "name": "pt1", 00:25:37.008 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:37.008 "is_configured": true, 00:25:37.008 "data_offset": 2048, 00:25:37.008 "data_size": 63488 00:25:37.008 }, 00:25:37.008 { 00:25:37.008 "name": "pt2", 00:25:37.008 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:37.008 "is_configured": true, 00:25:37.008 "data_offset": 2048, 00:25:37.008 "data_size": 63488 00:25:37.008 }, 00:25:37.008 { 00:25:37.008 "name": "pt3", 00:25:37.008 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:37.008 "is_configured": true, 00:25:37.008 "data_offset": 2048, 00:25:37.008 "data_size": 63488 00:25:37.008 } 00:25:37.008 ] 00:25:37.008 }' 00:25:37.008 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:37.008 23:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.578 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:25:37.578 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:37.578 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:25:37.578 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:37.578 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:37.578 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:37.578 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:37.578 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:37.578 23:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.578 23:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.578 [2024-12-09 23:07:12.656600] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:37.578 23:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.578 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:37.578 "name": "raid_bdev1", 00:25:37.578 "aliases": [ 00:25:37.578 "a23d58ae-e651-47ec-bedb-fb82c47eb3f0" 00:25:37.578 ], 00:25:37.578 "product_name": "Raid Volume", 00:25:37.578 "block_size": 512, 00:25:37.578 "num_blocks": 126976, 00:25:37.578 "uuid": "a23d58ae-e651-47ec-bedb-fb82c47eb3f0", 00:25:37.578 "assigned_rate_limits": { 00:25:37.578 "rw_ios_per_sec": 0, 00:25:37.578 "rw_mbytes_per_sec": 0, 00:25:37.578 "r_mbytes_per_sec": 0, 00:25:37.578 "w_mbytes_per_sec": 0 00:25:37.578 }, 00:25:37.578 "claimed": false, 00:25:37.578 "zoned": false, 00:25:37.578 "supported_io_types": { 00:25:37.578 "read": true, 00:25:37.578 "write": true, 00:25:37.578 "unmap": false, 00:25:37.578 "flush": false, 00:25:37.578 "reset": true, 00:25:37.578 "nvme_admin": false, 00:25:37.578 "nvme_io": false, 00:25:37.578 "nvme_io_md": false, 00:25:37.578 "write_zeroes": true, 00:25:37.578 "zcopy": false, 00:25:37.578 
"get_zone_info": false, 00:25:37.578 "zone_management": false, 00:25:37.578 "zone_append": false, 00:25:37.578 "compare": false, 00:25:37.578 "compare_and_write": false, 00:25:37.578 "abort": false, 00:25:37.578 "seek_hole": false, 00:25:37.578 "seek_data": false, 00:25:37.578 "copy": false, 00:25:37.578 "nvme_iov_md": false 00:25:37.578 }, 00:25:37.578 "driver_specific": { 00:25:37.578 "raid": { 00:25:37.578 "uuid": "a23d58ae-e651-47ec-bedb-fb82c47eb3f0", 00:25:37.578 "strip_size_kb": 64, 00:25:37.578 "state": "online", 00:25:37.578 "raid_level": "raid5f", 00:25:37.578 "superblock": true, 00:25:37.578 "num_base_bdevs": 3, 00:25:37.578 "num_base_bdevs_discovered": 3, 00:25:37.578 "num_base_bdevs_operational": 3, 00:25:37.578 "base_bdevs_list": [ 00:25:37.578 { 00:25:37.578 "name": "pt1", 00:25:37.578 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:37.578 "is_configured": true, 00:25:37.578 "data_offset": 2048, 00:25:37.578 "data_size": 63488 00:25:37.578 }, 00:25:37.578 { 00:25:37.578 "name": "pt2", 00:25:37.578 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:37.579 "is_configured": true, 00:25:37.579 "data_offset": 2048, 00:25:37.579 "data_size": 63488 00:25:37.579 }, 00:25:37.579 { 00:25:37.579 "name": "pt3", 00:25:37.579 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:37.579 "is_configured": true, 00:25:37.579 "data_offset": 2048, 00:25:37.579 "data_size": 63488 00:25:37.579 } 00:25:37.579 ] 00:25:37.579 } 00:25:37.579 } 00:25:37.579 }' 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:37.579 pt2 00:25:37.579 pt3' 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:37.579 23:07:12 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:25:37.579 [2024-12-09 23:07:12.864625] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a23d58ae-e651-47ec-bedb-fb82c47eb3f0 '!=' a23d58ae-e651-47ec-bedb-fb82c47eb3f0 ']' 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.579 [2024-12-09 23:07:12.896517] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.579 23:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.857 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:37.857 "name": "raid_bdev1", 00:25:37.857 "uuid": "a23d58ae-e651-47ec-bedb-fb82c47eb3f0", 00:25:37.857 "strip_size_kb": 64, 00:25:37.857 "state": "online", 00:25:37.857 "raid_level": "raid5f", 00:25:37.857 "superblock": true, 00:25:37.857 "num_base_bdevs": 3, 00:25:37.857 "num_base_bdevs_discovered": 2, 00:25:37.857 "num_base_bdevs_operational": 2, 00:25:37.857 "base_bdevs_list": [ 00:25:37.857 { 00:25:37.857 "name": null, 00:25:37.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.857 "is_configured": false, 00:25:37.857 "data_offset": 0, 00:25:37.857 "data_size": 63488 00:25:37.857 }, 00:25:37.857 { 00:25:37.857 "name": "pt2", 00:25:37.857 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:37.857 "is_configured": true, 00:25:37.857 "data_offset": 2048, 00:25:37.857 "data_size": 63488 00:25:37.857 }, 00:25:37.857 { 00:25:37.857 "name": "pt3", 00:25:37.857 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:37.857 "is_configured": true, 00:25:37.857 "data_offset": 2048, 00:25:37.857 "data_size": 63488 00:25:37.857 } 00:25:37.857 ] 00:25:37.857 }' 00:25:37.857 23:07:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:37.857 23:07:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.117 [2024-12-09 23:07:13.228539] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:38.117 [2024-12-09 23:07:13.228573] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:38.117 [2024-12-09 23:07:13.228635] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:38.117 [2024-12-09 23:07:13.228687] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:38.117 [2024-12-09 23:07:13.228698] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.117 [2024-12-09 23:07:13.280522] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:38.117 [2024-12-09 23:07:13.280582] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:38.117 [2024-12-09 23:07:13.280596] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:25:38.117 [2024-12-09 23:07:13.280630] vbdev_passthru.c: 697:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:25:38.117 [2024-12-09 23:07:13.282523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:38.117 [2024-12-09 23:07:13.282561] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:38.117 [2024-12-09 23:07:13.282631] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:38.117 [2024-12-09 23:07:13.282668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:38.117 pt2 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:38.117 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:38.118 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:38.118 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:38.118 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:38.118 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:38.118 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:38.118 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:25:38.118 23:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.118 23:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.118 23:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.118 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:38.118 "name": "raid_bdev1", 00:25:38.118 "uuid": "a23d58ae-e651-47ec-bedb-fb82c47eb3f0", 00:25:38.118 "strip_size_kb": 64, 00:25:38.118 "state": "configuring", 00:25:38.118 "raid_level": "raid5f", 00:25:38.118 "superblock": true, 00:25:38.118 "num_base_bdevs": 3, 00:25:38.118 "num_base_bdevs_discovered": 1, 00:25:38.118 "num_base_bdevs_operational": 2, 00:25:38.118 "base_bdevs_list": [ 00:25:38.118 { 00:25:38.118 "name": null, 00:25:38.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.118 "is_configured": false, 00:25:38.118 "data_offset": 2048, 00:25:38.118 "data_size": 63488 00:25:38.118 }, 00:25:38.118 { 00:25:38.118 "name": "pt2", 00:25:38.118 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:38.118 "is_configured": true, 00:25:38.118 "data_offset": 2048, 00:25:38.118 "data_size": 63488 00:25:38.118 }, 00:25:38.118 { 00:25:38.118 "name": null, 00:25:38.118 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:38.118 "is_configured": false, 00:25:38.118 "data_offset": 2048, 00:25:38.118 "data_size": 63488 00:25:38.118 } 00:25:38.118 ] 00:25:38.118 }' 00:25:38.118 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:38.118 23:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.377 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:25:38.377 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:25:38.377 23:07:13 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:25:38.377 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:38.377 23:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.377 23:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.377 [2024-12-09 23:07:13.620588] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:38.377 [2024-12-09 23:07:13.620654] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:38.377 [2024-12-09 23:07:13.620670] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:25:38.377 [2024-12-09 23:07:13.620678] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:38.377 [2024-12-09 23:07:13.621054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:38.377 [2024-12-09 23:07:13.621068] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:38.377 [2024-12-09 23:07:13.621148] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:25:38.377 [2024-12-09 23:07:13.621169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:38.377 [2024-12-09 23:07:13.621261] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:38.377 [2024-12-09 23:07:13.621271] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:38.377 [2024-12-09 23:07:13.621478] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:25:38.377 [2024-12-09 23:07:13.624433] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:38.377 [2024-12-09 23:07:13.624467] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:25:38.377 [2024-12-09 23:07:13.624715] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:38.377 pt3 00:25:38.377 23:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.377 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:38.377 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:38.377 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:38.377 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:38.377 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:38.377 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:38.377 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:38.377 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:38.377 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:38.377 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:38.377 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:38.377 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:38.377 23:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.377 23:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.377 23:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.377 23:07:13 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:38.377 "name": "raid_bdev1", 00:25:38.377 "uuid": "a23d58ae-e651-47ec-bedb-fb82c47eb3f0", 00:25:38.377 "strip_size_kb": 64, 00:25:38.377 "state": "online", 00:25:38.377 "raid_level": "raid5f", 00:25:38.377 "superblock": true, 00:25:38.377 "num_base_bdevs": 3, 00:25:38.377 "num_base_bdevs_discovered": 2, 00:25:38.377 "num_base_bdevs_operational": 2, 00:25:38.377 "base_bdevs_list": [ 00:25:38.377 { 00:25:38.377 "name": null, 00:25:38.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.377 "is_configured": false, 00:25:38.377 "data_offset": 2048, 00:25:38.377 "data_size": 63488 00:25:38.377 }, 00:25:38.377 { 00:25:38.377 "name": "pt2", 00:25:38.377 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:38.377 "is_configured": true, 00:25:38.377 "data_offset": 2048, 00:25:38.377 "data_size": 63488 00:25:38.377 }, 00:25:38.377 { 00:25:38.377 "name": "pt3", 00:25:38.377 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:38.377 "is_configured": true, 00:25:38.377 "data_offset": 2048, 00:25:38.377 "data_size": 63488 00:25:38.377 } 00:25:38.377 ] 00:25:38.377 }' 00:25:38.377 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:38.377 23:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.636 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:38.636 23:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.636 23:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.636 [2024-12-09 23:07:13.984722] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:38.636 [2024-12-09 23:07:13.984752] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:38.636 [2024-12-09 23:07:13.984811] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:38.636 [2024-12-09 23:07:13.984865] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:38.636 [2024-12-09 23:07:13.984873] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:25:38.636 23:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.636 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:25:38.636 23:07:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:38.636 23:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.636 23:07:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.896 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.896 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:25:38.896 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:25:38.896 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:25:38.896 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:25:38.896 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:25:38.896 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.896 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.896 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.896 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:25:38.896 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.896 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.896 [2024-12-09 23:07:14.036759] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:38.896 [2024-12-09 23:07:14.036814] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:38.896 [2024-12-09 23:07:14.036830] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:25:38.896 [2024-12-09 23:07:14.036838] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:38.896 [2024-12-09 23:07:14.038729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:38.896 [2024-12-09 23:07:14.038767] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:38.896 [2024-12-09 23:07:14.038839] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:38.896 [2024-12-09 23:07:14.038875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:38.896 [2024-12-09 23:07:14.038985] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:25:38.896 [2024-12-09 23:07:14.038994] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:38.896 [2024-12-09 23:07:14.039007] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:25:38.896 [2024-12-09 23:07:14.039046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:38.896 pt1 00:25:38.896 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.896 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:25:38.896 23:07:14 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:25:38.896 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:38.896 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:38.896 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:38.896 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:38.896 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:38.896 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:38.896 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:38.896 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:38.896 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:38.896 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:38.896 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:38.896 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.896 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.896 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.896 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:38.896 "name": "raid_bdev1", 00:25:38.896 "uuid": "a23d58ae-e651-47ec-bedb-fb82c47eb3f0", 00:25:38.896 "strip_size_kb": 64, 00:25:38.896 "state": "configuring", 00:25:38.896 "raid_level": "raid5f", 00:25:38.896 
"superblock": true, 00:25:38.896 "num_base_bdevs": 3, 00:25:38.896 "num_base_bdevs_discovered": 1, 00:25:38.896 "num_base_bdevs_operational": 2, 00:25:38.896 "base_bdevs_list": [ 00:25:38.896 { 00:25:38.896 "name": null, 00:25:38.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.896 "is_configured": false, 00:25:38.896 "data_offset": 2048, 00:25:38.896 "data_size": 63488 00:25:38.896 }, 00:25:38.896 { 00:25:38.896 "name": "pt2", 00:25:38.896 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:38.896 "is_configured": true, 00:25:38.896 "data_offset": 2048, 00:25:38.896 "data_size": 63488 00:25:38.896 }, 00:25:38.896 { 00:25:38.896 "name": null, 00:25:38.896 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:38.896 "is_configured": false, 00:25:38.896 "data_offset": 2048, 00:25:38.896 "data_size": 63488 00:25:38.896 } 00:25:38.896 ] 00:25:38.896 }' 00:25:38.896 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:38.896 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:39.154 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:25:39.154 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:25:39.154 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.154 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:39.154 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.154 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:25:39.154 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:39.154 23:07:14 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable
00:25:39.154 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:25:39.154 [2024-12-09 23:07:14.416846] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:25:39.154 [2024-12-09 23:07:14.416906] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:39.154 [2024-12-09 23:07:14.416925] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:25:39.154 [2024-12-09 23:07:14.416933] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:39.154 [2024-12-09 23:07:14.417343] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:39.154 [2024-12-09 23:07:14.417362] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:25:39.154 [2024-12-09 23:07:14.417428] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:25:39.154 [2024-12-09 23:07:14.417446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:25:39.154 [2024-12-09 23:07:14.417543] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900
00:25:39.154 [2024-12-09 23:07:14.417551] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:25:39.154 [2024-12-09 23:07:14.417753] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:25:39.154 [2024-12-09 23:07:14.420599] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900
00:25:39.154 [2024-12-09 23:07:14.420620] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900
00:25:39.154 [2024-12-09 23:07:14.420824] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:25:39.154 pt3
00:25:39.154 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:39.154 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:25:39.154 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:25:39.154 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:25:39.154 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:25:39.154 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:25:39.154 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:25:39.154 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:39.154 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:39.154 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:39.154 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:39.154 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:39.154 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:39.154 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:39.154 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:25:39.154 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:39.154 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:39.154 "name": "raid_bdev1",
00:25:39.154 "uuid": "a23d58ae-e651-47ec-bedb-fb82c47eb3f0",
00:25:39.154 "strip_size_kb": 64,
00:25:39.154 "state": "online",
00:25:39.154 "raid_level": "raid5f",
00:25:39.154 "superblock": true,
00:25:39.154 "num_base_bdevs": 3,
00:25:39.154 "num_base_bdevs_discovered": 2,
00:25:39.154 "num_base_bdevs_operational": 2,
00:25:39.154 "base_bdevs_list": [
00:25:39.154 {
00:25:39.154 "name": null,
00:25:39.154 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:39.154 "is_configured": false,
00:25:39.154 "data_offset": 2048,
00:25:39.154 "data_size": 63488
00:25:39.154 },
00:25:39.154 {
00:25:39.154 "name": "pt2",
00:25:39.154 "uuid": "00000000-0000-0000-0000-000000000002",
00:25:39.155 "is_configured": true,
00:25:39.155 "data_offset": 2048,
00:25:39.155 "data_size": 63488
00:25:39.155 },
00:25:39.155 {
00:25:39.155 "name": "pt3",
00:25:39.155 "uuid": "00000000-0000-0000-0000-000000000003",
00:25:39.155 "is_configured": true,
00:25:39.155 "data_offset": 2048,
00:25:39.155 "data_size": 63488
00:25:39.155 }
00:25:39.155 ]
00:25:39.155 }'
00:25:39.155 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:39.155 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:25:39.412 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:25:39.412 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online
00:25:39.412 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:39.412 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:25:39.412 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:39.669 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]]
00:25:39.669 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid'
00:25:39.669 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:25:39.669 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:39.669 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:25:39.669 [2024-12-09 23:07:14.793122] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:25:39.669 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:39.669 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' a23d58ae-e651-47ec-bedb-fb82c47eb3f0 '!=' a23d58ae-e651-47ec-bedb-fb82c47eb3f0 ']'
00:25:39.669 23:07:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 78903
00:25:39.669 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 78903 ']'
00:25:39.669 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 78903
00:25:39.669 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname
00:25:39.669 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:39.669 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78903
killing process with pid 78903
00:25:39.669 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:39.669 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:39.669 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78903'
00:25:39.669 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 78903
00:25:39.669 [2024-12-09 23:07:14.842842] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:25:39.669 23:07:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 78903
00:25:39.669 [2024-12-09 23:07:14.842922] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:25:39.669 [2024-12-09 23:07:14.842975] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:25:39.669 [2024-12-09 23:07:14.842985] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline
00:25:39.669 [2024-12-09 23:07:14.995910] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:25:40.239 23:07:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:25:40.239
00:25:40.239 real 0m5.838s
00:25:40.239 user 0m9.304s
00:25:40.239 sys 0m0.968s
00:25:40.239 23:07:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:40.239 ************************************
00:25:40.239 END TEST raid5f_superblock_test
00:25:40.239 ************************************
00:25:40.239 23:07:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:25:40.497 23:07:15 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']'
00:25:40.497 23:07:15 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true
00:25:40.497 23:07:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:25:40.497 23:07:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:40.497 23:07:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:25:40.497 ************************************
00:25:40.497 START TEST raid5f_rebuild_test
00:25:40.497 ************************************
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']'
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64'
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']'
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=79325
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 79325
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 79325 ']'
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:40.498 23:07:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:25:40.498 I/O size of 3145728 is greater than zero copy threshold (65536).
00:25:40.498 Zero copy mechanism will not be used.
00:25:40.498 [2024-12-09 23:07:15.708519] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization...
00:25:40.498 [2024-12-09 23:07:15.708668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79325 ]
00:25:40.756 [2024-12-09 23:07:15.869419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:40.756 [2024-12-09 23:07:15.973233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:40.756 [2024-12-09 23:07:16.111080] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:25:40.756 [2024-12-09 23:07:16.111283] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:25:41.324 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:41.324 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0
00:25:41.324 23:07:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:25:41.324 23:07:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:25:41.324 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:41.324 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:25:41.324 BaseBdev1_malloc
00:25:41.324 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:41.324 23:07:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:25:41.324 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:41.324 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:25:41.324 [2024-12-09 23:07:16.591904] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:25:41.324 [2024-12-09 23:07:16.591969] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:41.324 [2024-12-09 23:07:16.591991] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:25:41.324 [2024-12-09 23:07:16.592003] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:41.324 [2024-12-09 23:07:16.594204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:41.324 [2024-12-09 23:07:16.594359] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:25:41.324 BaseBdev1
00:25:41.324 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:41.324 23:07:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:25:41.324 23:07:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:25:41.324 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:41.324 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:25:41.324 BaseBdev2_malloc
00:25:41.324 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:41.324 23:07:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:25:41.324 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:41.324 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:25:41.324 [2024-12-09 23:07:16.629436] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:25:41.324 [2024-12-09 23:07:16.629640] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:41.324 [2024-12-09 23:07:16.629671] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:25:41.324 [2024-12-09 23:07:16.629683] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:41.324 [2024-12-09 23:07:16.631848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:41.324 [2024-12-09 23:07:16.631885] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:25:41.324 BaseBdev2
00:25:41.324 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:41.324 23:07:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:25:41.324 23:07:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:25:41.324 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:41.324 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:25:41.324 BaseBdev3_malloc
00:25:41.324 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:41.324 23:07:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:25:41.324 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:41.324 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:25:41.324 [2024-12-09 23:07:16.681294] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:25:41.324 [2024-12-09 23:07:16.681358] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:41.324 [2024-12-09 23:07:16.681381] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:25:41.324 [2024-12-09 23:07:16.681392] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:41.324 [2024-12-09 23:07:16.683601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:41.586 [2024-12-09 23:07:16.683773] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:25:41.586 BaseBdev3
00:25:41.586 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:41.586 23:07:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:25:41.586 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:41.586 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:25:41.586 spare_malloc
00:25:41.586 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:41.586 23:07:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:25:41.586 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:41.586 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:25:41.586 spare_delay
00:25:41.586 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:41.586 23:07:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:25:41.586 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:41.586 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:25:41.586 [2024-12-09 23:07:16.727590] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:25:41.586 [2024-12-09 23:07:16.727654] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:41.586 [2024-12-09 23:07:16.727674] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:25:41.586 [2024-12-09 23:07:16.727685] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:41.586 [2024-12-09 23:07:16.730025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:41.586 [2024-12-09 23:07:16.730084] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:25:41.586 spare
00:25:41.586 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:41.586 23:07:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1
00:25:41.586 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:41.586 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:25:41.586 [2024-12-09 23:07:16.735716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:25:41.586 [2024-12-09 23:07:16.738115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:25:41.586 [2024-12-09 23:07:16.738217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:25:41.586 [2024-12-09 23:07:16.738352] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:25:41.586 [2024-12-09 23:07:16.738372] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:25:41.586 [2024-12-09 23:07:16.738720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:25:41.586 [2024-12-09 23:07:16.744262] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:25:41.586 [2024-12-09 23:07:16.744304] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:25:41.586 [2024-12-09 23:07:16.744576] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:25:41.586 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:41.586 23:07:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:25:41.586 23:07:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:25:41.586 23:07:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:25:41.586 23:07:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:25:41.586 23:07:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:25:41.586 23:07:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:25:41.586 23:07:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:41.586 23:07:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:41.586 23:07:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:41.586 23:07:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:41.586 23:07:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:41.586 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:41.586 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:25:41.586 23:07:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:41.586 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:41.586 23:07:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:41.586 "name": "raid_bdev1",
00:25:41.586 "uuid": "4adc2ab4-1b12-4f96-beec-c279d83e5ee6",
00:25:41.586 "strip_size_kb": 64,
00:25:41.586 "state": "online",
00:25:41.586 "raid_level": "raid5f",
00:25:41.586 "superblock": false,
00:25:41.586 "num_base_bdevs": 3,
00:25:41.586 "num_base_bdevs_discovered": 3,
00:25:41.586 "num_base_bdevs_operational": 3,
00:25:41.586 "base_bdevs_list": [
00:25:41.586 {
00:25:41.586 "name": "BaseBdev1",
00:25:41.586 "uuid": "9f22083d-1d69-5e96-9323-673799ae204a",
00:25:41.586 "is_configured": true,
00:25:41.586 "data_offset": 0,
00:25:41.586 "data_size": 65536
00:25:41.586 },
00:25:41.586 {
00:25:41.586 "name": "BaseBdev2",
00:25:41.586 "uuid": "8f160925-1e4c-5ca5-8b00-f85ee8827eeb",
00:25:41.586 "is_configured": true,
00:25:41.586 "data_offset": 0,
00:25:41.586 "data_size": 65536
00:25:41.586 },
00:25:41.586 {
00:25:41.586 "name": "BaseBdev3",
00:25:41.586 "uuid": "2622209a-4ca4-55f6-afb4-9398faeaa351",
00:25:41.586 "is_configured": true,
00:25:41.586 "data_offset": 0,
00:25:41.586 "data_size": 65536
00:25:41.586 }
00:25:41.586 ]
00:25:41.586 }'
00:25:41.586 23:07:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:41.586 23:07:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:25:41.848 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:25:41.848 23:07:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:41.848 23:07:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:25:41.848 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:25:41.848 [2024-12-09 23:07:17.085405] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:25:41.848 23:07:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:41.848 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072
00:25:41.848 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:41.848 23:07:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:41.848 23:07:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:25:41.848 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:25:41.848 23:07:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:41.848 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0
00:25:41.848 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:25:41.848 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:25:41.848 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:25:41.848 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:25:41.848 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:25:41.848 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:25:41.848 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list
00:25:41.848 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:25:41.848 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list
00:25:41.848 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i
00:25:41.848 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:25:41.848 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:25:41.848 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:25:42.109 [2024-12-09 23:07:17.345298] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:25:42.109 /dev/nbd0
00:25:42.109 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:25:42.109 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:25:42.109 23:07:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:25:42.109 23:07:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i
00:25:42.109 23:07:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:25:42.109 23:07:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:25:42.109 23:07:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:25:42.109 23:07:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break
00:25:42.109 23:07:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:25:42.109 23:07:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:25:42.109 23:07:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:25:42.109 1+0 records in
00:25:42.109 1+0 records out
00:25:42.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305981 s, 13.4 MB/s
00:25:42.109 23:07:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:25:42.109 23:07:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096
00:25:42.109 23:07:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:25:42.109 23:07:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:25:42.109 23:07:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0
00:25:42.109 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:25:42.109 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:25:42.109 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']'
00:25:42.109 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256
00:25:42.109 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128
00:25:42.109 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct
00:25:42.444 512+0 records in
00:25:42.444 512+0 records out
00:25:42.444 67108864 bytes (67 MB, 64 MiB) copied, 0.362827 s, 185 MB/s
00:25:42.444 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:25:42.444 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:25:42.444 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:25:42.444 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list
00:25:42.444 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i
00:25:42.444 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:25:42.444 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:25:42.707 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:25:42.707 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:25:42.707 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:25:42.707 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:25:42.707 [2024-12-09 23:07:17.941363] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:25:42.707 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:25:42.707 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:25:42.707 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:25:42.707 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:25:42.707 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:25:42.707 23:07:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.707 23:07:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:25:42.707 [2024-12-09 23:07:17.949522] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:25:42.707 23:07:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.707 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:25:42.707 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:25:42.707 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:25:42.707 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:25:42.707 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:25:42.707 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:25:42.707 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:42.707 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:42.707 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:42.707 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:42.707 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:42.707 23:07:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.707 23:07:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:25:42.707 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:42.707 23:07:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.707 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:42.707 "name": "raid_bdev1",
00:25:42.707 "uuid": "4adc2ab4-1b12-4f96-beec-c279d83e5ee6",
00:25:42.707 "strip_size_kb": 64,
00:25:42.707 "state": "online",
00:25:42.707 "raid_level": "raid5f",
00:25:42.707 "superblock": false,
00:25:42.707 "num_base_bdevs": 3,
00:25:42.707 "num_base_bdevs_discovered": 2,
00:25:42.707 "num_base_bdevs_operational": 2,
00:25:42.707 "base_bdevs_list": [
00:25:42.707 {
00:25:42.707 "name": null,
00:25:42.707 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:42.707 "is_configured": false,
00:25:42.707 "data_offset": 0,
00:25:42.707 "data_size": 65536
00:25:42.707 },
00:25:42.707 {
00:25:42.707 "name": "BaseBdev2",
00:25:42.707 "uuid": "8f160925-1e4c-5ca5-8b00-f85ee8827eeb",
00:25:42.707 "is_configured": true,
00:25:42.707 "data_offset": 0,
00:25:42.707 "data_size": 65536
00:25:42.707 },
00:25:42.707 {
00:25:42.707 "name": "BaseBdev3",
00:25:42.707 "uuid": "2622209a-4ca4-55f6-afb4-9398faeaa351",
00:25:42.707 "is_configured": true,
00:25:42.707 "data_offset": 0,
00:25:42.707 "data_size": 65536
00:25:42.707 }
00:25:42.707 ]
00:25:42.707 }'
00:25:42.707 23:07:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:42.707 23:07:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:25:42.966 23:07:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:25:42.966 23:07:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.966 23:07:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:25:42.966 [2024-12-09 23:07:18.249584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:25:42.966 [2024-12-09 23:07:18.260655] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680
00:25:42.966 23:07:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.966 23:07:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1
00:25:42.966 [2024-12-09 23:07:18.266266] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:25:43.908 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:25:43.908 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:25:43.908 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:25:43.908 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:25:43.908 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:25:44.169 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:44.169 23:07:19
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.169 23:07:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.170 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:44.170 23:07:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.170 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:44.170 "name": "raid_bdev1", 00:25:44.170 "uuid": "4adc2ab4-1b12-4f96-beec-c279d83e5ee6", 00:25:44.170 "strip_size_kb": 64, 00:25:44.170 "state": "online", 00:25:44.170 "raid_level": "raid5f", 00:25:44.170 "superblock": false, 00:25:44.170 "num_base_bdevs": 3, 00:25:44.170 "num_base_bdevs_discovered": 3, 00:25:44.170 "num_base_bdevs_operational": 3, 00:25:44.170 "process": { 00:25:44.170 "type": "rebuild", 00:25:44.170 "target": "spare", 00:25:44.170 "progress": { 00:25:44.170 "blocks": 18432, 00:25:44.170 "percent": 14 00:25:44.170 } 00:25:44.170 }, 00:25:44.170 "base_bdevs_list": [ 00:25:44.170 { 00:25:44.170 "name": "spare", 00:25:44.170 "uuid": "b0088cb4-6251-5aed-8879-6e173cc71406", 00:25:44.170 "is_configured": true, 00:25:44.170 "data_offset": 0, 00:25:44.170 "data_size": 65536 00:25:44.170 }, 00:25:44.170 { 00:25:44.170 "name": "BaseBdev2", 00:25:44.170 "uuid": "8f160925-1e4c-5ca5-8b00-f85ee8827eeb", 00:25:44.170 "is_configured": true, 00:25:44.170 "data_offset": 0, 00:25:44.170 "data_size": 65536 00:25:44.170 }, 00:25:44.170 { 00:25:44.170 "name": "BaseBdev3", 00:25:44.170 "uuid": "2622209a-4ca4-55f6-afb4-9398faeaa351", 00:25:44.170 "is_configured": true, 00:25:44.170 "data_offset": 0, 00:25:44.170 "data_size": 65536 00:25:44.170 } 00:25:44.170 ] 00:25:44.170 }' 00:25:44.170 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:44.170 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:44.170 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:44.170 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:44.170 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:25:44.170 23:07:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.170 23:07:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.170 [2024-12-09 23:07:19.375955] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:44.170 [2024-12-09 23:07:19.377033] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:44.170 [2024-12-09 23:07:19.377091] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:44.170 [2024-12-09 23:07:19.377127] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:44.170 [2024-12-09 23:07:19.377137] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:44.170 23:07:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.170 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:44.170 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:44.170 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:44.170 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:44.170 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:44.170 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:25:44.170 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:44.170 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:44.170 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:44.170 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:44.170 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:44.170 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:44.170 23:07:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.170 23:07:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.170 23:07:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.170 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:44.170 "name": "raid_bdev1", 00:25:44.170 "uuid": "4adc2ab4-1b12-4f96-beec-c279d83e5ee6", 00:25:44.170 "strip_size_kb": 64, 00:25:44.170 "state": "online", 00:25:44.170 "raid_level": "raid5f", 00:25:44.170 "superblock": false, 00:25:44.170 "num_base_bdevs": 3, 00:25:44.170 "num_base_bdevs_discovered": 2, 00:25:44.170 "num_base_bdevs_operational": 2, 00:25:44.170 "base_bdevs_list": [ 00:25:44.170 { 00:25:44.170 "name": null, 00:25:44.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:44.170 "is_configured": false, 00:25:44.170 "data_offset": 0, 00:25:44.170 "data_size": 65536 00:25:44.170 }, 00:25:44.170 { 00:25:44.170 "name": "BaseBdev2", 00:25:44.170 "uuid": "8f160925-1e4c-5ca5-8b00-f85ee8827eeb", 00:25:44.170 "is_configured": true, 00:25:44.170 "data_offset": 0, 00:25:44.170 "data_size": 65536 00:25:44.170 }, 00:25:44.170 { 00:25:44.170 "name": "BaseBdev3", 00:25:44.170 "uuid": 
"2622209a-4ca4-55f6-afb4-9398faeaa351", 00:25:44.170 "is_configured": true, 00:25:44.170 "data_offset": 0, 00:25:44.170 "data_size": 65536 00:25:44.170 } 00:25:44.170 ] 00:25:44.170 }' 00:25:44.170 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:44.170 23:07:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.429 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:44.429 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:44.429 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:44.429 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:44.429 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:44.429 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:44.429 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:44.429 23:07:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.429 23:07:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.429 23:07:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.429 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:44.429 "name": "raid_bdev1", 00:25:44.429 "uuid": "4adc2ab4-1b12-4f96-beec-c279d83e5ee6", 00:25:44.429 "strip_size_kb": 64, 00:25:44.429 "state": "online", 00:25:44.429 "raid_level": "raid5f", 00:25:44.429 "superblock": false, 00:25:44.429 "num_base_bdevs": 3, 00:25:44.429 "num_base_bdevs_discovered": 2, 00:25:44.429 "num_base_bdevs_operational": 2, 00:25:44.429 "base_bdevs_list": [ 00:25:44.429 { 00:25:44.429 
"name": null, 00:25:44.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:44.429 "is_configured": false, 00:25:44.429 "data_offset": 0, 00:25:44.429 "data_size": 65536 00:25:44.429 }, 00:25:44.429 { 00:25:44.429 "name": "BaseBdev2", 00:25:44.429 "uuid": "8f160925-1e4c-5ca5-8b00-f85ee8827eeb", 00:25:44.429 "is_configured": true, 00:25:44.429 "data_offset": 0, 00:25:44.429 "data_size": 65536 00:25:44.429 }, 00:25:44.429 { 00:25:44.429 "name": "BaseBdev3", 00:25:44.429 "uuid": "2622209a-4ca4-55f6-afb4-9398faeaa351", 00:25:44.429 "is_configured": true, 00:25:44.429 "data_offset": 0, 00:25:44.429 "data_size": 65536 00:25:44.429 } 00:25:44.429 ] 00:25:44.429 }' 00:25:44.429 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:44.429 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:44.429 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:44.700 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:44.700 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:44.700 23:07:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.700 23:07:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.700 [2024-12-09 23:07:19.816193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:44.700 [2024-12-09 23:07:19.826278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:25:44.700 23:07:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.700 23:07:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:25:44.700 [2024-12-09 23:07:19.831694] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:25:45.679 23:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:45.679 23:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:45.679 23:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:45.679 23:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:45.679 23:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:45.679 23:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:45.679 23:07:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.679 23:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:45.679 23:07:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.679 23:07:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.679 23:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:45.679 "name": "raid_bdev1", 00:25:45.679 "uuid": "4adc2ab4-1b12-4f96-beec-c279d83e5ee6", 00:25:45.679 "strip_size_kb": 64, 00:25:45.679 "state": "online", 00:25:45.679 "raid_level": "raid5f", 00:25:45.679 "superblock": false, 00:25:45.679 "num_base_bdevs": 3, 00:25:45.679 "num_base_bdevs_discovered": 3, 00:25:45.679 "num_base_bdevs_operational": 3, 00:25:45.679 "process": { 00:25:45.679 "type": "rebuild", 00:25:45.679 "target": "spare", 00:25:45.679 "progress": { 00:25:45.679 "blocks": 18432, 00:25:45.679 "percent": 14 00:25:45.679 } 00:25:45.679 }, 00:25:45.679 "base_bdevs_list": [ 00:25:45.679 { 00:25:45.679 "name": "spare", 00:25:45.679 "uuid": "b0088cb4-6251-5aed-8879-6e173cc71406", 00:25:45.680 "is_configured": true, 00:25:45.680 "data_offset": 0, 
00:25:45.680 "data_size": 65536 00:25:45.680 }, 00:25:45.680 { 00:25:45.680 "name": "BaseBdev2", 00:25:45.680 "uuid": "8f160925-1e4c-5ca5-8b00-f85ee8827eeb", 00:25:45.680 "is_configured": true, 00:25:45.680 "data_offset": 0, 00:25:45.680 "data_size": 65536 00:25:45.680 }, 00:25:45.680 { 00:25:45.680 "name": "BaseBdev3", 00:25:45.680 "uuid": "2622209a-4ca4-55f6-afb4-9398faeaa351", 00:25:45.680 "is_configured": true, 00:25:45.680 "data_offset": 0, 00:25:45.680 "data_size": 65536 00:25:45.680 } 00:25:45.680 ] 00:25:45.680 }' 00:25:45.680 23:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:45.680 23:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:45.680 23:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:45.680 23:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:45.680 23:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:25:45.680 23:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:25:45.680 23:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:25:45.680 23:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=448 00:25:45.680 23:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:45.680 23:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:45.680 23:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:45.680 23:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:45.680 23:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:45.680 23:07:20 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:45.680 23:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:45.680 23:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:45.680 23:07:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.680 23:07:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.680 23:07:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.680 23:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:45.680 "name": "raid_bdev1", 00:25:45.680 "uuid": "4adc2ab4-1b12-4f96-beec-c279d83e5ee6", 00:25:45.680 "strip_size_kb": 64, 00:25:45.680 "state": "online", 00:25:45.680 "raid_level": "raid5f", 00:25:45.680 "superblock": false, 00:25:45.680 "num_base_bdevs": 3, 00:25:45.680 "num_base_bdevs_discovered": 3, 00:25:45.680 "num_base_bdevs_operational": 3, 00:25:45.680 "process": { 00:25:45.680 "type": "rebuild", 00:25:45.680 "target": "spare", 00:25:45.680 "progress": { 00:25:45.680 "blocks": 20480, 00:25:45.680 "percent": 15 00:25:45.680 } 00:25:45.680 }, 00:25:45.680 "base_bdevs_list": [ 00:25:45.680 { 00:25:45.680 "name": "spare", 00:25:45.680 "uuid": "b0088cb4-6251-5aed-8879-6e173cc71406", 00:25:45.680 "is_configured": true, 00:25:45.680 "data_offset": 0, 00:25:45.680 "data_size": 65536 00:25:45.680 }, 00:25:45.680 { 00:25:45.680 "name": "BaseBdev2", 00:25:45.680 "uuid": "8f160925-1e4c-5ca5-8b00-f85ee8827eeb", 00:25:45.680 "is_configured": true, 00:25:45.680 "data_offset": 0, 00:25:45.680 "data_size": 65536 00:25:45.680 }, 00:25:45.680 { 00:25:45.680 "name": "BaseBdev3", 00:25:45.680 "uuid": "2622209a-4ca4-55f6-afb4-9398faeaa351", 00:25:45.680 "is_configured": true, 00:25:45.680 "data_offset": 0, 00:25:45.680 "data_size": 65536 00:25:45.680 } 
00:25:45.680 ] 00:25:45.680 }' 00:25:45.680 23:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:45.680 23:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:45.680 23:07:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:45.680 23:07:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:45.680 23:07:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:47.065 23:07:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:47.065 23:07:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:47.065 23:07:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:47.065 23:07:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:47.065 23:07:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:47.065 23:07:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:47.065 23:07:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:47.065 23:07:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:47.065 23:07:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.065 23:07:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:47.065 23:07:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.065 23:07:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:47.065 "name": "raid_bdev1", 00:25:47.065 "uuid": "4adc2ab4-1b12-4f96-beec-c279d83e5ee6", 00:25:47.065 
"strip_size_kb": 64, 00:25:47.065 "state": "online", 00:25:47.065 "raid_level": "raid5f", 00:25:47.065 "superblock": false, 00:25:47.065 "num_base_bdevs": 3, 00:25:47.065 "num_base_bdevs_discovered": 3, 00:25:47.065 "num_base_bdevs_operational": 3, 00:25:47.065 "process": { 00:25:47.065 "type": "rebuild", 00:25:47.065 "target": "spare", 00:25:47.065 "progress": { 00:25:47.065 "blocks": 43008, 00:25:47.065 "percent": 32 00:25:47.065 } 00:25:47.065 }, 00:25:47.065 "base_bdevs_list": [ 00:25:47.065 { 00:25:47.065 "name": "spare", 00:25:47.065 "uuid": "b0088cb4-6251-5aed-8879-6e173cc71406", 00:25:47.065 "is_configured": true, 00:25:47.065 "data_offset": 0, 00:25:47.065 "data_size": 65536 00:25:47.065 }, 00:25:47.065 { 00:25:47.065 "name": "BaseBdev2", 00:25:47.065 "uuid": "8f160925-1e4c-5ca5-8b00-f85ee8827eeb", 00:25:47.065 "is_configured": true, 00:25:47.065 "data_offset": 0, 00:25:47.065 "data_size": 65536 00:25:47.065 }, 00:25:47.065 { 00:25:47.065 "name": "BaseBdev3", 00:25:47.065 "uuid": "2622209a-4ca4-55f6-afb4-9398faeaa351", 00:25:47.065 "is_configured": true, 00:25:47.065 "data_offset": 0, 00:25:47.065 "data_size": 65536 00:25:47.065 } 00:25:47.065 ] 00:25:47.065 }' 00:25:47.065 23:07:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:47.065 23:07:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:47.065 23:07:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:47.065 23:07:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:47.065 23:07:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:48.001 23:07:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:48.001 23:07:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:48.001 23:07:23 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:48.001 23:07:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:48.001 23:07:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:48.001 23:07:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:48.001 23:07:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:48.001 23:07:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:48.001 23:07:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.001 23:07:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:48.001 23:07:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.001 23:07:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:48.001 "name": "raid_bdev1", 00:25:48.001 "uuid": "4adc2ab4-1b12-4f96-beec-c279d83e5ee6", 00:25:48.001 "strip_size_kb": 64, 00:25:48.001 "state": "online", 00:25:48.001 "raid_level": "raid5f", 00:25:48.001 "superblock": false, 00:25:48.001 "num_base_bdevs": 3, 00:25:48.001 "num_base_bdevs_discovered": 3, 00:25:48.001 "num_base_bdevs_operational": 3, 00:25:48.001 "process": { 00:25:48.001 "type": "rebuild", 00:25:48.001 "target": "spare", 00:25:48.001 "progress": { 00:25:48.001 "blocks": 65536, 00:25:48.001 "percent": 50 00:25:48.001 } 00:25:48.001 }, 00:25:48.001 "base_bdevs_list": [ 00:25:48.001 { 00:25:48.002 "name": "spare", 00:25:48.002 "uuid": "b0088cb4-6251-5aed-8879-6e173cc71406", 00:25:48.002 "is_configured": true, 00:25:48.002 "data_offset": 0, 00:25:48.002 "data_size": 65536 00:25:48.002 }, 00:25:48.002 { 00:25:48.002 "name": "BaseBdev2", 00:25:48.002 "uuid": "8f160925-1e4c-5ca5-8b00-f85ee8827eeb", 00:25:48.002 
"is_configured": true, 00:25:48.002 "data_offset": 0, 00:25:48.002 "data_size": 65536 00:25:48.002 }, 00:25:48.002 { 00:25:48.002 "name": "BaseBdev3", 00:25:48.002 "uuid": "2622209a-4ca4-55f6-afb4-9398faeaa351", 00:25:48.002 "is_configured": true, 00:25:48.002 "data_offset": 0, 00:25:48.002 "data_size": 65536 00:25:48.002 } 00:25:48.002 ] 00:25:48.002 }' 00:25:48.002 23:07:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:48.002 23:07:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:48.002 23:07:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:48.002 23:07:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:48.002 23:07:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:48.941 23:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:48.941 23:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:48.941 23:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:48.941 23:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:48.941 23:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:48.941 23:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:48.941 23:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:48.941 23:07:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.941 23:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:48.941 23:07:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:25:48.941 23:07:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.941 23:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:48.941 "name": "raid_bdev1", 00:25:48.941 "uuid": "4adc2ab4-1b12-4f96-beec-c279d83e5ee6", 00:25:48.941 "strip_size_kb": 64, 00:25:48.941 "state": "online", 00:25:48.941 "raid_level": "raid5f", 00:25:48.941 "superblock": false, 00:25:48.941 "num_base_bdevs": 3, 00:25:48.941 "num_base_bdevs_discovered": 3, 00:25:48.941 "num_base_bdevs_operational": 3, 00:25:48.941 "process": { 00:25:48.941 "type": "rebuild", 00:25:48.941 "target": "spare", 00:25:48.941 "progress": { 00:25:48.941 "blocks": 88064, 00:25:48.941 "percent": 67 00:25:48.941 } 00:25:48.941 }, 00:25:48.941 "base_bdevs_list": [ 00:25:48.941 { 00:25:48.941 "name": "spare", 00:25:48.941 "uuid": "b0088cb4-6251-5aed-8879-6e173cc71406", 00:25:48.941 "is_configured": true, 00:25:48.941 "data_offset": 0, 00:25:48.941 "data_size": 65536 00:25:48.941 }, 00:25:48.941 { 00:25:48.941 "name": "BaseBdev2", 00:25:48.941 "uuid": "8f160925-1e4c-5ca5-8b00-f85ee8827eeb", 00:25:48.941 "is_configured": true, 00:25:48.941 "data_offset": 0, 00:25:48.941 "data_size": 65536 00:25:48.941 }, 00:25:48.941 { 00:25:48.941 "name": "BaseBdev3", 00:25:48.941 "uuid": "2622209a-4ca4-55f6-afb4-9398faeaa351", 00:25:48.941 "is_configured": true, 00:25:48.941 "data_offset": 0, 00:25:48.941 "data_size": 65536 00:25:48.941 } 00:25:48.941 ] 00:25:48.941 }' 00:25:48.941 23:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:48.941 23:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:48.941 23:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:49.201 23:07:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:49.201 23:07:24 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:50.143 23:07:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:50.143 23:07:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:50.143 23:07:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:50.143 23:07:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:50.143 23:07:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:50.143 23:07:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:50.143 23:07:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:50.143 23:07:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.143 23:07:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:50.143 23:07:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:50.143 23:07:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.143 23:07:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:50.143 "name": "raid_bdev1", 00:25:50.143 "uuid": "4adc2ab4-1b12-4f96-beec-c279d83e5ee6", 00:25:50.143 "strip_size_kb": 64, 00:25:50.143 "state": "online", 00:25:50.143 "raid_level": "raid5f", 00:25:50.143 "superblock": false, 00:25:50.143 "num_base_bdevs": 3, 00:25:50.143 "num_base_bdevs_discovered": 3, 00:25:50.143 "num_base_bdevs_operational": 3, 00:25:50.143 "process": { 00:25:50.143 "type": "rebuild", 00:25:50.143 "target": "spare", 00:25:50.143 "progress": { 00:25:50.143 "blocks": 110592, 00:25:50.143 "percent": 84 00:25:50.143 } 00:25:50.143 }, 00:25:50.143 "base_bdevs_list": [ 00:25:50.143 { 
00:25:50.143 "name": "spare", 00:25:50.143 "uuid": "b0088cb4-6251-5aed-8879-6e173cc71406", 00:25:50.143 "is_configured": true, 00:25:50.143 "data_offset": 0, 00:25:50.143 "data_size": 65536 00:25:50.143 }, 00:25:50.143 { 00:25:50.143 "name": "BaseBdev2", 00:25:50.143 "uuid": "8f160925-1e4c-5ca5-8b00-f85ee8827eeb", 00:25:50.143 "is_configured": true, 00:25:50.143 "data_offset": 0, 00:25:50.143 "data_size": 65536 00:25:50.143 }, 00:25:50.143 { 00:25:50.143 "name": "BaseBdev3", 00:25:50.143 "uuid": "2622209a-4ca4-55f6-afb4-9398faeaa351", 00:25:50.143 "is_configured": true, 00:25:50.143 "data_offset": 0, 00:25:50.143 "data_size": 65536 00:25:50.143 } 00:25:50.143 ] 00:25:50.143 }' 00:25:50.143 23:07:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:50.143 23:07:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:50.143 23:07:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:50.143 23:07:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:50.143 23:07:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:51.185 [2024-12-09 23:07:26.287714] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:51.185 [2024-12-09 23:07:26.287798] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:51.185 [2024-12-09 23:07:26.287838] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:51.185 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:51.185 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:51.185 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:51.185 23:07:26 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:51.185 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:51.185 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:51.185 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:51.185 23:07:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.185 23:07:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:51.185 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:51.185 23:07:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.185 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:51.185 "name": "raid_bdev1", 00:25:51.185 "uuid": "4adc2ab4-1b12-4f96-beec-c279d83e5ee6", 00:25:51.186 "strip_size_kb": 64, 00:25:51.186 "state": "online", 00:25:51.186 "raid_level": "raid5f", 00:25:51.186 "superblock": false, 00:25:51.186 "num_base_bdevs": 3, 00:25:51.186 "num_base_bdevs_discovered": 3, 00:25:51.186 "num_base_bdevs_operational": 3, 00:25:51.186 "base_bdevs_list": [ 00:25:51.186 { 00:25:51.186 "name": "spare", 00:25:51.186 "uuid": "b0088cb4-6251-5aed-8879-6e173cc71406", 00:25:51.186 "is_configured": true, 00:25:51.186 "data_offset": 0, 00:25:51.186 "data_size": 65536 00:25:51.186 }, 00:25:51.186 { 00:25:51.186 "name": "BaseBdev2", 00:25:51.186 "uuid": "8f160925-1e4c-5ca5-8b00-f85ee8827eeb", 00:25:51.186 "is_configured": true, 00:25:51.186 "data_offset": 0, 00:25:51.186 "data_size": 65536 00:25:51.186 }, 00:25:51.186 { 00:25:51.186 "name": "BaseBdev3", 00:25:51.186 "uuid": "2622209a-4ca4-55f6-afb4-9398faeaa351", 00:25:51.186 "is_configured": true, 00:25:51.186 "data_offset": 0, 00:25:51.186 "data_size": 65536 00:25:51.186 } 
00:25:51.186 ] 00:25:51.186 }' 00:25:51.186 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:51.186 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:51.186 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:51.186 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:25:51.186 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:25:51.186 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:51.186 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:51.186 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:51.186 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:51.186 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:51.186 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:51.186 23:07:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.186 23:07:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:51.186 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:51.186 23:07:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.448 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:51.448 "name": "raid_bdev1", 00:25:51.448 "uuid": "4adc2ab4-1b12-4f96-beec-c279d83e5ee6", 00:25:51.448 "strip_size_kb": 64, 00:25:51.448 "state": "online", 00:25:51.448 "raid_level": "raid5f", 00:25:51.448 "superblock": false, 
00:25:51.448 "num_base_bdevs": 3, 00:25:51.448 "num_base_bdevs_discovered": 3, 00:25:51.448 "num_base_bdevs_operational": 3, 00:25:51.448 "base_bdevs_list": [ 00:25:51.448 { 00:25:51.448 "name": "spare", 00:25:51.448 "uuid": "b0088cb4-6251-5aed-8879-6e173cc71406", 00:25:51.448 "is_configured": true, 00:25:51.448 "data_offset": 0, 00:25:51.448 "data_size": 65536 00:25:51.448 }, 00:25:51.448 { 00:25:51.448 "name": "BaseBdev2", 00:25:51.448 "uuid": "8f160925-1e4c-5ca5-8b00-f85ee8827eeb", 00:25:51.448 "is_configured": true, 00:25:51.448 "data_offset": 0, 00:25:51.448 "data_size": 65536 00:25:51.448 }, 00:25:51.448 { 00:25:51.448 "name": "BaseBdev3", 00:25:51.448 "uuid": "2622209a-4ca4-55f6-afb4-9398faeaa351", 00:25:51.448 "is_configured": true, 00:25:51.448 "data_offset": 0, 00:25:51.448 "data_size": 65536 00:25:51.448 } 00:25:51.448 ] 00:25:51.448 }' 00:25:51.448 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:51.448 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:51.448 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:51.448 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:51.448 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:51.448 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:51.448 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:51.448 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:51.448 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:51.448 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:51.448 
23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:51.448 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:51.448 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:51.448 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:51.448 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:51.448 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:51.448 23:07:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.448 23:07:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:51.448 23:07:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.448 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:51.448 "name": "raid_bdev1", 00:25:51.448 "uuid": "4adc2ab4-1b12-4f96-beec-c279d83e5ee6", 00:25:51.448 "strip_size_kb": 64, 00:25:51.448 "state": "online", 00:25:51.448 "raid_level": "raid5f", 00:25:51.448 "superblock": false, 00:25:51.448 "num_base_bdevs": 3, 00:25:51.448 "num_base_bdevs_discovered": 3, 00:25:51.448 "num_base_bdevs_operational": 3, 00:25:51.448 "base_bdevs_list": [ 00:25:51.448 { 00:25:51.448 "name": "spare", 00:25:51.448 "uuid": "b0088cb4-6251-5aed-8879-6e173cc71406", 00:25:51.448 "is_configured": true, 00:25:51.448 "data_offset": 0, 00:25:51.448 "data_size": 65536 00:25:51.448 }, 00:25:51.448 { 00:25:51.448 "name": "BaseBdev2", 00:25:51.448 "uuid": "8f160925-1e4c-5ca5-8b00-f85ee8827eeb", 00:25:51.448 "is_configured": true, 00:25:51.448 "data_offset": 0, 00:25:51.448 "data_size": 65536 00:25:51.448 }, 00:25:51.448 { 00:25:51.448 "name": "BaseBdev3", 00:25:51.448 "uuid": "2622209a-4ca4-55f6-afb4-9398faeaa351", 
00:25:51.448 "is_configured": true, 00:25:51.448 "data_offset": 0, 00:25:51.448 "data_size": 65536 00:25:51.448 } 00:25:51.448 ] 00:25:51.448 }' 00:25:51.448 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:51.448 23:07:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:51.707 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:51.707 23:07:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.707 23:07:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:51.707 [2024-12-09 23:07:26.954857] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:51.707 [2024-12-09 23:07:26.954885] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:51.707 [2024-12-09 23:07:26.954952] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:51.707 [2024-12-09 23:07:26.955017] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:51.707 [2024-12-09 23:07:26.955035] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:51.707 23:07:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.707 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:51.707 23:07:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.707 23:07:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:51.707 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:25:51.707 23:07:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.707 23:07:26 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:25:51.707 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:25:51.707 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:25:51.707 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:51.707 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:25:51.707 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:51.707 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:51.707 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:51.707 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:51.707 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:25:51.707 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:51.707 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:51.707 23:07:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:51.968 /dev/nbd0 00:25:51.968 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:51.968 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:51.968 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:51.968 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:25:51.968 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:51.968 23:07:27 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:51.969 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:51.969 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:25:51.969 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:51.969 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:51.969 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:51.969 1+0 records in 00:25:51.969 1+0 records out 00:25:51.969 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210402 s, 19.5 MB/s 00:25:51.969 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:51.969 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:25:51.969 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:51.969 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:51.969 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:25:51.969 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:51.969 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:51.969 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:25:52.230 /dev/nbd1 00:25:52.230 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:52.230 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:52.230 23:07:27 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:25:52.230 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:25:52.230 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:52.230 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:52.230 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:25:52.230 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:25:52.230 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:52.230 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:52.230 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:52.230 1+0 records in 00:25:52.230 1+0 records out 00:25:52.230 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216726 s, 18.9 MB/s 00:25:52.230 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:52.230 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:25:52.230 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:52.230 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:52.230 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:25:52.230 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:52.230 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:52.230 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:25:52.230 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:25:52.230 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:25:52.230 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:52.230 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:52.230 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:25:52.230 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:52.230 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:25:52.491 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:52.491 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:52.491 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:52.491 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:52.491 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:52.491 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:52.491 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:25:52.491 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:25:52.491 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:52.491 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:25:52.766 23:07:27 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:52.767 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:52.767 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:52.767 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:52.767 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:52.767 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:52.767 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:25:52.767 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:25:52.767 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:25:52.767 23:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 79325 00:25:52.767 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 79325 ']' 00:25:52.767 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 79325 00:25:52.767 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:25:52.767 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:52.767 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79325 00:25:52.767 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:52.767 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:52.767 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79325' 00:25:52.767 killing process with pid 79325 00:25:52.767 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 79325 00:25:52.767 
Received shutdown signal, test time was about 60.000000 seconds 00:25:52.767 00:25:52.767 Latency(us) 00:25:52.767 [2024-12-09T23:07:28.130Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:52.767 [2024-12-09T23:07:28.130Z] =================================================================================================================== 00:25:52.767 [2024-12-09T23:07:28.130Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:52.767 [2024-12-09 23:07:27.980861] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:52.767 23:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 79325 00:25:53.029 [2024-12-09 23:07:28.225938] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:53.606 23:07:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:25:53.606 00:25:53.606 real 0m13.303s 00:25:53.606 user 0m16.037s 00:25:53.606 sys 0m1.469s 00:25:53.606 23:07:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:53.606 ************************************ 00:25:53.606 END TEST raid5f_rebuild_test 00:25:53.606 ************************************ 00:25:53.606 23:07:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:53.870 23:07:28 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:25:53.870 23:07:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:25:53.870 23:07:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:53.870 23:07:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:53.870 ************************************ 00:25:53.870 START TEST raid5f_rebuild_test_sb 00:25:53.870 ************************************ 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:25:53.870 
23:07:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local 
raid_bdev_name=raid_bdev1 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=79747 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 79747 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 79747 ']' 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:53.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.870 23:07:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:53.870 [2024-12-09 23:07:29.037280] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:25:53.870 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:53.870 Zero copy mechanism will not be used. 00:25:53.870 [2024-12-09 23:07:29.037680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79747 ] 00:25:53.870 [2024-12-09 23:07:29.191446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.128 [2024-12-09 23:07:29.293164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.128 [2024-12-09 23:07:29.429807] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:54.128 [2024-12-09 23:07:29.429851] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:54.698 23:07:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:54.698 23:07:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:25:54.698 23:07:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:54.698 23:07:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:54.698 23:07:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.698 
23:07:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.698 BaseBdev1_malloc 00:25:54.698 23:07:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.698 23:07:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:54.698 23:07:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.698 23:07:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.698 [2024-12-09 23:07:29.877064] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:54.698 [2024-12-09 23:07:29.877138] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:54.698 [2024-12-09 23:07:29.877170] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:54.698 [2024-12-09 23:07:29.877188] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:54.698 [2024-12-09 23:07:29.879333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:54.698 [2024-12-09 23:07:29.879365] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:54.698 BaseBdev1 00:25:54.698 23:07:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.698 23:07:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:54.698 23:07:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:54.698 23:07:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.698 23:07:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.698 BaseBdev2_malloc 00:25:54.698 23:07:29 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.698 23:07:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:54.698 23:07:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.698 23:07:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.698 [2024-12-09 23:07:29.913110] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:54.698 [2024-12-09 23:07:29.913159] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:54.698 [2024-12-09 23:07:29.913182] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:54.698 [2024-12-09 23:07:29.913193] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:54.698 [2024-12-09 23:07:29.915268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:54.698 [2024-12-09 23:07:29.915300] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:54.698 BaseBdev2 00:25:54.698 23:07:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.698 23:07:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:54.698 23:07:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:54.698 23:07:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.698 23:07:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.698 BaseBdev3_malloc 00:25:54.698 23:07:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.698 23:07:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:25:54.698 23:07:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.698 23:07:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.698 [2024-12-09 23:07:29.962317] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:54.698 [2024-12-09 23:07:29.962368] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:54.698 [2024-12-09 23:07:29.962390] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:54.698 [2024-12-09 23:07:29.962401] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:54.698 [2024-12-09 23:07:29.964472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:54.698 [2024-12-09 23:07:29.964503] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:54.698 BaseBdev3 00:25:54.698 23:07:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.698 23:07:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:25:54.698 23:07:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.698 23:07:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.698 spare_malloc 00:25:54.698 23:07:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.698 23:07:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:54.698 23:07:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.698 23:07:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.698 spare_delay 00:25:54.698 
23:07:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.698 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:54.698 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.698 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.698 [2024-12-09 23:07:30.006904] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:54.698 [2024-12-09 23:07:30.006950] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:54.698 [2024-12-09 23:07:30.006967] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:25:54.698 [2024-12-09 23:07:30.006977] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:54.698 [2024-12-09 23:07:30.009150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:54.698 [2024-12-09 23:07:30.009183] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:54.698 spare 00:25:54.698 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.698 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:25:54.698 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.698 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.699 [2024-12-09 23:07:30.014974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:54.699 [2024-12-09 23:07:30.016905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:54.699 [2024-12-09 23:07:30.016978] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:54.699 [2024-12-09 23:07:30.017165] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:54.699 [2024-12-09 23:07:30.017176] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:54.699 [2024-12-09 23:07:30.017438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:54.699 [2024-12-09 23:07:30.021180] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:54.699 [2024-12-09 23:07:30.021204] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:25:54.699 [2024-12-09 23:07:30.021382] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:54.699 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.699 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:54.699 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:54.699 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:54.699 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:54.699 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:54.699 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:54.699 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:54.699 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:54.699 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:25:54.699 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:54.699 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:54.699 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:54.699 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.699 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.699 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.699 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:54.699 "name": "raid_bdev1", 00:25:54.699 "uuid": "848e9bff-5cbd-4708-a0f5-0981484e88f5", 00:25:54.699 "strip_size_kb": 64, 00:25:54.699 "state": "online", 00:25:54.699 "raid_level": "raid5f", 00:25:54.699 "superblock": true, 00:25:54.699 "num_base_bdevs": 3, 00:25:54.699 "num_base_bdevs_discovered": 3, 00:25:54.699 "num_base_bdevs_operational": 3, 00:25:54.699 "base_bdevs_list": [ 00:25:54.699 { 00:25:54.699 "name": "BaseBdev1", 00:25:54.699 "uuid": "1d9565ea-7ab9-54ec-9d68-0935718674f0", 00:25:54.699 "is_configured": true, 00:25:54.699 "data_offset": 2048, 00:25:54.699 "data_size": 63488 00:25:54.699 }, 00:25:54.699 { 00:25:54.699 "name": "BaseBdev2", 00:25:54.699 "uuid": "ccbcfaba-00db-5707-8cdf-37d50765306c", 00:25:54.699 "is_configured": true, 00:25:54.699 "data_offset": 2048, 00:25:54.699 "data_size": 63488 00:25:54.699 }, 00:25:54.699 { 00:25:54.699 "name": "BaseBdev3", 00:25:54.699 "uuid": "bec92b49-fe8f-58b3-9c63-ee10ddadaf80", 00:25:54.699 "is_configured": true, 00:25:54.699 "data_offset": 2048, 00:25:54.699 "data_size": 63488 00:25:54.699 } 00:25:54.699 ] 00:25:54.699 }' 00:25:54.699 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:54.699 23:07:30 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:55.265 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:55.265 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:25:55.265 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.265 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:55.265 [2024-12-09 23:07:30.337760] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:55.265 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.265 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:25:55.265 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:55.265 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.265 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:55.265 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:55.265 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.265 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:25:55.265 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:25:55.265 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:25:55.265 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:25:55.265 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:25:55.265 23:07:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:25:55.265 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:25:55.265 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:55.265 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:55.265 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:55.265 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:25:55.265 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:55.265 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:55.265 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:55.265 [2024-12-09 23:07:30.581644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:25:55.265 /dev/nbd0 00:25:55.265 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:55.265 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:55.265 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:55.265 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:25:55.265 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:55.265 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:55.265 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:55.265 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 
00:25:55.265 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:55.265 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:55.266 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:55.525 1+0 records in 00:25:55.525 1+0 records out 00:25:55.525 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342334 s, 12.0 MB/s 00:25:55.525 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:55.525 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:25:55.525 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:55.525 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:55.525 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:25:55.525 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:55.525 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:55.525 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:25:55.525 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:25:55.525 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:25:55.525 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:25:55.786 496+0 records in 00:25:55.786 496+0 records out 00:25:55.786 65011712 bytes (65 MB, 62 MiB) copied, 0.33916 s, 192 MB/s 00:25:55.786 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:25:55.786 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:25:55.786 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:55.786 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:55.786 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:25:55.786 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:55.786 23:07:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:25:56.047 23:07:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:56.047 23:07:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:56.047 23:07:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:56.047 23:07:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:56.047 23:07:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:56.047 23:07:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:56.047 [2024-12-09 23:07:31.158372] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:56.047 23:07:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:25:56.047 23:07:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:25:56.047 23:07:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:25:56.047 23:07:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.047 23:07:31 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:25:56.047 [2024-12-09 23:07:31.166455] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:56.047 23:07:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.047 23:07:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:56.047 23:07:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:56.047 23:07:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:56.047 23:07:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:56.047 23:07:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:56.047 23:07:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:56.047 23:07:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:56.047 23:07:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:56.047 23:07:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:56.047 23:07:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:56.047 23:07:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:56.047 23:07:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:56.047 23:07:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.047 23:07:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:56.047 23:07:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.047 23:07:31 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:56.047 "name": "raid_bdev1", 00:25:56.047 "uuid": "848e9bff-5cbd-4708-a0f5-0981484e88f5", 00:25:56.047 "strip_size_kb": 64, 00:25:56.047 "state": "online", 00:25:56.047 "raid_level": "raid5f", 00:25:56.047 "superblock": true, 00:25:56.047 "num_base_bdevs": 3, 00:25:56.047 "num_base_bdevs_discovered": 2, 00:25:56.047 "num_base_bdevs_operational": 2, 00:25:56.047 "base_bdevs_list": [ 00:25:56.047 { 00:25:56.047 "name": null, 00:25:56.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:56.047 "is_configured": false, 00:25:56.047 "data_offset": 0, 00:25:56.047 "data_size": 63488 00:25:56.047 }, 00:25:56.047 { 00:25:56.047 "name": "BaseBdev2", 00:25:56.047 "uuid": "ccbcfaba-00db-5707-8cdf-37d50765306c", 00:25:56.047 "is_configured": true, 00:25:56.047 "data_offset": 2048, 00:25:56.047 "data_size": 63488 00:25:56.047 }, 00:25:56.047 { 00:25:56.047 "name": "BaseBdev3", 00:25:56.047 "uuid": "bec92b49-fe8f-58b3-9c63-ee10ddadaf80", 00:25:56.047 "is_configured": true, 00:25:56.047 "data_offset": 2048, 00:25:56.047 "data_size": 63488 00:25:56.047 } 00:25:56.047 ] 00:25:56.047 }' 00:25:56.047 23:07:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:56.047 23:07:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:56.307 23:07:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:56.307 23:07:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.307 23:07:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:56.307 [2024-12-09 23:07:31.526537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:56.307 [2024-12-09 23:07:31.537371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:25:56.307 23:07:31 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.307 23:07:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:25:56.307 [2024-12-09 23:07:31.542852] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:57.255 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:57.255 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:57.255 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:57.255 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:57.255 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:57.255 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:57.255 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.255 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:57.255 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:57.255 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.255 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:57.255 "name": "raid_bdev1", 00:25:57.255 "uuid": "848e9bff-5cbd-4708-a0f5-0981484e88f5", 00:25:57.255 "strip_size_kb": 64, 00:25:57.255 "state": "online", 00:25:57.255 "raid_level": "raid5f", 00:25:57.255 "superblock": true, 00:25:57.255 "num_base_bdevs": 3, 00:25:57.255 "num_base_bdevs_discovered": 3, 00:25:57.255 "num_base_bdevs_operational": 3, 00:25:57.255 "process": { 00:25:57.255 "type": "rebuild", 00:25:57.255 "target": "spare", 00:25:57.255 "progress": { 
00:25:57.255 "blocks": 18432, 00:25:57.255 "percent": 14 00:25:57.255 } 00:25:57.255 }, 00:25:57.255 "base_bdevs_list": [ 00:25:57.255 { 00:25:57.255 "name": "spare", 00:25:57.255 "uuid": "4b333b03-deb6-515a-883b-6f096771290d", 00:25:57.255 "is_configured": true, 00:25:57.255 "data_offset": 2048, 00:25:57.255 "data_size": 63488 00:25:57.255 }, 00:25:57.255 { 00:25:57.255 "name": "BaseBdev2", 00:25:57.255 "uuid": "ccbcfaba-00db-5707-8cdf-37d50765306c", 00:25:57.255 "is_configured": true, 00:25:57.255 "data_offset": 2048, 00:25:57.255 "data_size": 63488 00:25:57.255 }, 00:25:57.255 { 00:25:57.255 "name": "BaseBdev3", 00:25:57.255 "uuid": "bec92b49-fe8f-58b3-9c63-ee10ddadaf80", 00:25:57.255 "is_configured": true, 00:25:57.255 "data_offset": 2048, 00:25:57.255 "data_size": 63488 00:25:57.255 } 00:25:57.255 ] 00:25:57.255 }' 00:25:57.255 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:57.255 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:57.255 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:57.517 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:57.517 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:25:57.517 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.517 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:57.517 [2024-12-09 23:07:32.644029] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:57.517 [2024-12-09 23:07:32.653235] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:57.517 [2024-12-09 23:07:32.653292] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:25:57.517 [2024-12-09 23:07:32.653310] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:57.517 [2024-12-09 23:07:32.653318] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:57.517 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.517 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:57.517 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:57.517 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:57.517 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:57.517 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:57.517 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:57.517 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:57.517 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:57.517 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:57.517 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:57.517 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:57.517 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:57.517 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.517 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:57.517 23:07:32 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.517 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:57.517 "name": "raid_bdev1", 00:25:57.517 "uuid": "848e9bff-5cbd-4708-a0f5-0981484e88f5", 00:25:57.517 "strip_size_kb": 64, 00:25:57.517 "state": "online", 00:25:57.517 "raid_level": "raid5f", 00:25:57.517 "superblock": true, 00:25:57.517 "num_base_bdevs": 3, 00:25:57.517 "num_base_bdevs_discovered": 2, 00:25:57.517 "num_base_bdevs_operational": 2, 00:25:57.517 "base_bdevs_list": [ 00:25:57.517 { 00:25:57.517 "name": null, 00:25:57.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.517 "is_configured": false, 00:25:57.517 "data_offset": 0, 00:25:57.517 "data_size": 63488 00:25:57.517 }, 00:25:57.517 { 00:25:57.517 "name": "BaseBdev2", 00:25:57.517 "uuid": "ccbcfaba-00db-5707-8cdf-37d50765306c", 00:25:57.517 "is_configured": true, 00:25:57.517 "data_offset": 2048, 00:25:57.517 "data_size": 63488 00:25:57.517 }, 00:25:57.517 { 00:25:57.517 "name": "BaseBdev3", 00:25:57.517 "uuid": "bec92b49-fe8f-58b3-9c63-ee10ddadaf80", 00:25:57.517 "is_configured": true, 00:25:57.517 "data_offset": 2048, 00:25:57.517 "data_size": 63488 00:25:57.517 } 00:25:57.517 ] 00:25:57.517 }' 00:25:57.517 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:57.517 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:57.779 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:57.779 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:57.779 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:57.779 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:57.779 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:57.779 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:57.779 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:57.779 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.779 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:57.779 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.779 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:57.780 "name": "raid_bdev1", 00:25:57.780 "uuid": "848e9bff-5cbd-4708-a0f5-0981484e88f5", 00:25:57.780 "strip_size_kb": 64, 00:25:57.780 "state": "online", 00:25:57.780 "raid_level": "raid5f", 00:25:57.780 "superblock": true, 00:25:57.780 "num_base_bdevs": 3, 00:25:57.780 "num_base_bdevs_discovered": 2, 00:25:57.780 "num_base_bdevs_operational": 2, 00:25:57.780 "base_bdevs_list": [ 00:25:57.780 { 00:25:57.780 "name": null, 00:25:57.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.780 "is_configured": false, 00:25:57.780 "data_offset": 0, 00:25:57.780 "data_size": 63488 00:25:57.780 }, 00:25:57.780 { 00:25:57.780 "name": "BaseBdev2", 00:25:57.780 "uuid": "ccbcfaba-00db-5707-8cdf-37d50765306c", 00:25:57.780 "is_configured": true, 00:25:57.780 "data_offset": 2048, 00:25:57.780 "data_size": 63488 00:25:57.780 }, 00:25:57.780 { 00:25:57.780 "name": "BaseBdev3", 00:25:57.780 "uuid": "bec92b49-fe8f-58b3-9c63-ee10ddadaf80", 00:25:57.780 "is_configured": true, 00:25:57.780 "data_offset": 2048, 00:25:57.780 "data_size": 63488 00:25:57.780 } 00:25:57.780 ] 00:25:57.780 }' 00:25:57.780 23:07:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:57.780 23:07:33 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:57.780 23:07:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:57.780 23:07:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:57.780 23:07:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:57.780 23:07:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.780 23:07:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:57.780 [2024-12-09 23:07:33.067479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:57.780 [2024-12-09 23:07:33.077371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:25:57.780 23:07:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.780 23:07:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:25:57.780 [2024-12-09 23:07:33.082727] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:58.736 23:07:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:58.736 23:07:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:58.736 23:07:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:58.736 23:07:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:58.736 23:07:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:58.736 23:07:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:58.736 23:07:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:25:58.736 23:07:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.736 23:07:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:58.997 23:07:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.997 23:07:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:58.997 "name": "raid_bdev1", 00:25:58.997 "uuid": "848e9bff-5cbd-4708-a0f5-0981484e88f5", 00:25:58.997 "strip_size_kb": 64, 00:25:58.997 "state": "online", 00:25:58.997 "raid_level": "raid5f", 00:25:58.997 "superblock": true, 00:25:58.997 "num_base_bdevs": 3, 00:25:58.997 "num_base_bdevs_discovered": 3, 00:25:58.997 "num_base_bdevs_operational": 3, 00:25:58.997 "process": { 00:25:58.997 "type": "rebuild", 00:25:58.997 "target": "spare", 00:25:58.997 "progress": { 00:25:58.997 "blocks": 20480, 00:25:58.997 "percent": 16 00:25:58.997 } 00:25:58.997 }, 00:25:58.997 "base_bdevs_list": [ 00:25:58.997 { 00:25:58.997 "name": "spare", 00:25:58.997 "uuid": "4b333b03-deb6-515a-883b-6f096771290d", 00:25:58.997 "is_configured": true, 00:25:58.997 "data_offset": 2048, 00:25:58.997 "data_size": 63488 00:25:58.997 }, 00:25:58.997 { 00:25:58.997 "name": "BaseBdev2", 00:25:58.997 "uuid": "ccbcfaba-00db-5707-8cdf-37d50765306c", 00:25:58.997 "is_configured": true, 00:25:58.997 "data_offset": 2048, 00:25:58.997 "data_size": 63488 00:25:58.997 }, 00:25:58.997 { 00:25:58.997 "name": "BaseBdev3", 00:25:58.997 "uuid": "bec92b49-fe8f-58b3-9c63-ee10ddadaf80", 00:25:58.997 "is_configured": true, 00:25:58.997 "data_offset": 2048, 00:25:58.997 "data_size": 63488 00:25:58.997 } 00:25:58.997 ] 00:25:58.997 }' 00:25:58.997 23:07:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:58.997 23:07:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:58.997 
23:07:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:58.997 23:07:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:58.997 23:07:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:25:58.997 23:07:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:25:58.997 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:25:58.997 23:07:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:25:58.997 23:07:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:25:58.997 23:07:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=462 00:25:58.997 23:07:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:58.997 23:07:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:58.997 23:07:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:58.997 23:07:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:58.997 23:07:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:58.997 23:07:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:58.997 23:07:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:58.997 23:07:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.997 23:07:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:58.997 23:07:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:25:58.997 23:07:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.997 23:07:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:58.997 "name": "raid_bdev1", 00:25:58.997 "uuid": "848e9bff-5cbd-4708-a0f5-0981484e88f5", 00:25:58.997 "strip_size_kb": 64, 00:25:58.997 "state": "online", 00:25:58.997 "raid_level": "raid5f", 00:25:58.997 "superblock": true, 00:25:58.997 "num_base_bdevs": 3, 00:25:58.997 "num_base_bdevs_discovered": 3, 00:25:58.997 "num_base_bdevs_operational": 3, 00:25:58.997 "process": { 00:25:58.997 "type": "rebuild", 00:25:58.997 "target": "spare", 00:25:58.997 "progress": { 00:25:58.997 "blocks": 20480, 00:25:58.997 "percent": 16 00:25:58.997 } 00:25:58.997 }, 00:25:58.997 "base_bdevs_list": [ 00:25:58.997 { 00:25:58.997 "name": "spare", 00:25:58.997 "uuid": "4b333b03-deb6-515a-883b-6f096771290d", 00:25:58.997 "is_configured": true, 00:25:58.997 "data_offset": 2048, 00:25:58.997 "data_size": 63488 00:25:58.997 }, 00:25:58.997 { 00:25:58.997 "name": "BaseBdev2", 00:25:58.997 "uuid": "ccbcfaba-00db-5707-8cdf-37d50765306c", 00:25:58.997 "is_configured": true, 00:25:58.997 "data_offset": 2048, 00:25:58.997 "data_size": 63488 00:25:58.997 }, 00:25:58.997 { 00:25:58.997 "name": "BaseBdev3", 00:25:58.997 "uuid": "bec92b49-fe8f-58b3-9c63-ee10ddadaf80", 00:25:58.997 "is_configured": true, 00:25:58.997 "data_offset": 2048, 00:25:58.997 "data_size": 63488 00:25:58.997 } 00:25:58.997 ] 00:25:58.997 }' 00:25:58.997 23:07:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:58.997 23:07:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:58.997 23:07:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:58.997 23:07:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:58.997 
23:07:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:59.935 23:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:59.935 23:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:59.935 23:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:59.935 23:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:59.935 23:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:59.935 23:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:59.935 23:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:59.935 23:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:59.935 23:07:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.935 23:07:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:00.194 23:07:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.194 23:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:00.194 "name": "raid_bdev1", 00:26:00.194 "uuid": "848e9bff-5cbd-4708-a0f5-0981484e88f5", 00:26:00.194 "strip_size_kb": 64, 00:26:00.194 "state": "online", 00:26:00.194 "raid_level": "raid5f", 00:26:00.194 "superblock": true, 00:26:00.194 "num_base_bdevs": 3, 00:26:00.194 "num_base_bdevs_discovered": 3, 00:26:00.194 "num_base_bdevs_operational": 3, 00:26:00.194 "process": { 00:26:00.194 "type": "rebuild", 00:26:00.194 "target": "spare", 00:26:00.194 "progress": { 00:26:00.194 "blocks": 43008, 00:26:00.194 "percent": 33 00:26:00.194 } 00:26:00.194 }, 00:26:00.194 
"base_bdevs_list": [ 00:26:00.194 { 00:26:00.194 "name": "spare", 00:26:00.194 "uuid": "4b333b03-deb6-515a-883b-6f096771290d", 00:26:00.194 "is_configured": true, 00:26:00.194 "data_offset": 2048, 00:26:00.194 "data_size": 63488 00:26:00.194 }, 00:26:00.194 { 00:26:00.194 "name": "BaseBdev2", 00:26:00.194 "uuid": "ccbcfaba-00db-5707-8cdf-37d50765306c", 00:26:00.194 "is_configured": true, 00:26:00.194 "data_offset": 2048, 00:26:00.194 "data_size": 63488 00:26:00.194 }, 00:26:00.194 { 00:26:00.194 "name": "BaseBdev3", 00:26:00.194 "uuid": "bec92b49-fe8f-58b3-9c63-ee10ddadaf80", 00:26:00.194 "is_configured": true, 00:26:00.194 "data_offset": 2048, 00:26:00.194 "data_size": 63488 00:26:00.194 } 00:26:00.194 ] 00:26:00.194 }' 00:26:00.195 23:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:00.195 23:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:00.195 23:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:00.195 23:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:00.195 23:07:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:01.154 23:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:01.154 23:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:01.154 23:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:01.154 23:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:01.154 23:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:01.154 23:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:01.154 23:07:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:01.154 23:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:01.154 23:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.154 23:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:01.154 23:07:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.154 23:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:01.154 "name": "raid_bdev1", 00:26:01.154 "uuid": "848e9bff-5cbd-4708-a0f5-0981484e88f5", 00:26:01.154 "strip_size_kb": 64, 00:26:01.154 "state": "online", 00:26:01.154 "raid_level": "raid5f", 00:26:01.154 "superblock": true, 00:26:01.154 "num_base_bdevs": 3, 00:26:01.154 "num_base_bdevs_discovered": 3, 00:26:01.154 "num_base_bdevs_operational": 3, 00:26:01.154 "process": { 00:26:01.154 "type": "rebuild", 00:26:01.154 "target": "spare", 00:26:01.154 "progress": { 00:26:01.154 "blocks": 65536, 00:26:01.154 "percent": 51 00:26:01.154 } 00:26:01.154 }, 00:26:01.154 "base_bdevs_list": [ 00:26:01.154 { 00:26:01.154 "name": "spare", 00:26:01.154 "uuid": "4b333b03-deb6-515a-883b-6f096771290d", 00:26:01.154 "is_configured": true, 00:26:01.154 "data_offset": 2048, 00:26:01.154 "data_size": 63488 00:26:01.154 }, 00:26:01.154 { 00:26:01.154 "name": "BaseBdev2", 00:26:01.154 "uuid": "ccbcfaba-00db-5707-8cdf-37d50765306c", 00:26:01.154 "is_configured": true, 00:26:01.154 "data_offset": 2048, 00:26:01.154 "data_size": 63488 00:26:01.154 }, 00:26:01.154 { 00:26:01.154 "name": "BaseBdev3", 00:26:01.154 "uuid": "bec92b49-fe8f-58b3-9c63-ee10ddadaf80", 00:26:01.154 "is_configured": true, 00:26:01.154 "data_offset": 2048, 00:26:01.154 "data_size": 63488 00:26:01.154 } 00:26:01.154 ] 00:26:01.154 }' 00:26:01.154 23:07:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:01.154 23:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:01.154 23:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:01.154 23:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:01.154 23:07:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:02.538 23:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:02.538 23:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:02.538 23:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:02.538 23:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:02.538 23:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:02.538 23:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:02.538 23:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:02.538 23:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:02.538 23:07:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.538 23:07:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:02.538 23:07:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.538 23:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:02.538 "name": "raid_bdev1", 00:26:02.538 "uuid": "848e9bff-5cbd-4708-a0f5-0981484e88f5", 00:26:02.538 
"strip_size_kb": 64, 00:26:02.538 "state": "online", 00:26:02.538 "raid_level": "raid5f", 00:26:02.538 "superblock": true, 00:26:02.538 "num_base_bdevs": 3, 00:26:02.538 "num_base_bdevs_discovered": 3, 00:26:02.538 "num_base_bdevs_operational": 3, 00:26:02.538 "process": { 00:26:02.538 "type": "rebuild", 00:26:02.538 "target": "spare", 00:26:02.538 "progress": { 00:26:02.538 "blocks": 88064, 00:26:02.538 "percent": 69 00:26:02.538 } 00:26:02.538 }, 00:26:02.538 "base_bdevs_list": [ 00:26:02.538 { 00:26:02.538 "name": "spare", 00:26:02.538 "uuid": "4b333b03-deb6-515a-883b-6f096771290d", 00:26:02.538 "is_configured": true, 00:26:02.538 "data_offset": 2048, 00:26:02.538 "data_size": 63488 00:26:02.538 }, 00:26:02.538 { 00:26:02.538 "name": "BaseBdev2", 00:26:02.538 "uuid": "ccbcfaba-00db-5707-8cdf-37d50765306c", 00:26:02.538 "is_configured": true, 00:26:02.538 "data_offset": 2048, 00:26:02.538 "data_size": 63488 00:26:02.538 }, 00:26:02.538 { 00:26:02.538 "name": "BaseBdev3", 00:26:02.538 "uuid": "bec92b49-fe8f-58b3-9c63-ee10ddadaf80", 00:26:02.538 "is_configured": true, 00:26:02.538 "data_offset": 2048, 00:26:02.538 "data_size": 63488 00:26:02.538 } 00:26:02.538 ] 00:26:02.538 }' 00:26:02.538 23:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:02.538 23:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:02.538 23:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:02.538 23:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:02.538 23:07:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:03.478 23:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:03.478 23:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:26:03.478 23:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:03.478 23:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:03.478 23:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:03.478 23:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:03.478 23:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:03.478 23:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:03.478 23:07:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.478 23:07:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:03.478 23:07:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.478 23:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:03.478 "name": "raid_bdev1", 00:26:03.478 "uuid": "848e9bff-5cbd-4708-a0f5-0981484e88f5", 00:26:03.478 "strip_size_kb": 64, 00:26:03.478 "state": "online", 00:26:03.478 "raid_level": "raid5f", 00:26:03.478 "superblock": true, 00:26:03.478 "num_base_bdevs": 3, 00:26:03.478 "num_base_bdevs_discovered": 3, 00:26:03.478 "num_base_bdevs_operational": 3, 00:26:03.478 "process": { 00:26:03.478 "type": "rebuild", 00:26:03.478 "target": "spare", 00:26:03.478 "progress": { 00:26:03.478 "blocks": 110592, 00:26:03.478 "percent": 87 00:26:03.478 } 00:26:03.478 }, 00:26:03.478 "base_bdevs_list": [ 00:26:03.478 { 00:26:03.478 "name": "spare", 00:26:03.478 "uuid": "4b333b03-deb6-515a-883b-6f096771290d", 00:26:03.478 "is_configured": true, 00:26:03.478 "data_offset": 2048, 00:26:03.478 "data_size": 63488 00:26:03.478 }, 00:26:03.478 { 00:26:03.478 "name": "BaseBdev2", 00:26:03.478 "uuid": 
"ccbcfaba-00db-5707-8cdf-37d50765306c", 00:26:03.478 "is_configured": true, 00:26:03.478 "data_offset": 2048, 00:26:03.478 "data_size": 63488 00:26:03.478 }, 00:26:03.478 { 00:26:03.478 "name": "BaseBdev3", 00:26:03.478 "uuid": "bec92b49-fe8f-58b3-9c63-ee10ddadaf80", 00:26:03.478 "is_configured": true, 00:26:03.478 "data_offset": 2048, 00:26:03.478 "data_size": 63488 00:26:03.478 } 00:26:03.478 ] 00:26:03.478 }' 00:26:03.478 23:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:03.478 23:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:03.478 23:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:03.478 23:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:03.478 23:07:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:04.048 [2024-12-09 23:07:39.333254] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:04.048 [2024-12-09 23:07:39.333339] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:04.048 [2024-12-09 23:07:39.333442] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:04.620 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:04.620 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:04.620 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:04.620 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:04.620 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:04.620 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:04.620 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:04.620 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.620 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:04.620 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.620 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.620 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:04.620 "name": "raid_bdev1", 00:26:04.620 "uuid": "848e9bff-5cbd-4708-a0f5-0981484e88f5", 00:26:04.620 "strip_size_kb": 64, 00:26:04.620 "state": "online", 00:26:04.620 "raid_level": "raid5f", 00:26:04.620 "superblock": true, 00:26:04.620 "num_base_bdevs": 3, 00:26:04.620 "num_base_bdevs_discovered": 3, 00:26:04.620 "num_base_bdevs_operational": 3, 00:26:04.620 "base_bdevs_list": [ 00:26:04.620 { 00:26:04.620 "name": "spare", 00:26:04.620 "uuid": "4b333b03-deb6-515a-883b-6f096771290d", 00:26:04.620 "is_configured": true, 00:26:04.620 "data_offset": 2048, 00:26:04.620 "data_size": 63488 00:26:04.620 }, 00:26:04.620 { 00:26:04.620 "name": "BaseBdev2", 00:26:04.620 "uuid": "ccbcfaba-00db-5707-8cdf-37d50765306c", 00:26:04.620 "is_configured": true, 00:26:04.620 "data_offset": 2048, 00:26:04.620 "data_size": 63488 00:26:04.620 }, 00:26:04.620 { 00:26:04.620 "name": "BaseBdev3", 00:26:04.620 "uuid": "bec92b49-fe8f-58b3-9c63-ee10ddadaf80", 00:26:04.620 "is_configured": true, 00:26:04.620 "data_offset": 2048, 00:26:04.620 "data_size": 63488 00:26:04.620 } 00:26:04.620 ] 00:26:04.620 }' 00:26:04.620 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:04.620 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:04.620 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:04.620 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:26:04.620 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:26:04.621 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:04.621 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:04.621 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:04.621 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:04.621 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:04.621 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:04.621 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:04.621 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.621 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.621 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.621 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:04.621 "name": "raid_bdev1", 00:26:04.621 "uuid": "848e9bff-5cbd-4708-a0f5-0981484e88f5", 00:26:04.621 "strip_size_kb": 64, 00:26:04.621 "state": "online", 00:26:04.621 "raid_level": "raid5f", 00:26:04.621 "superblock": true, 00:26:04.621 "num_base_bdevs": 3, 00:26:04.621 "num_base_bdevs_discovered": 3, 00:26:04.621 "num_base_bdevs_operational": 3, 00:26:04.621 "base_bdevs_list": [ 
00:26:04.621 { 00:26:04.621 "name": "spare", 00:26:04.621 "uuid": "4b333b03-deb6-515a-883b-6f096771290d", 00:26:04.621 "is_configured": true, 00:26:04.621 "data_offset": 2048, 00:26:04.621 "data_size": 63488 00:26:04.621 }, 00:26:04.621 { 00:26:04.621 "name": "BaseBdev2", 00:26:04.621 "uuid": "ccbcfaba-00db-5707-8cdf-37d50765306c", 00:26:04.621 "is_configured": true, 00:26:04.621 "data_offset": 2048, 00:26:04.621 "data_size": 63488 00:26:04.621 }, 00:26:04.621 { 00:26:04.621 "name": "BaseBdev3", 00:26:04.621 "uuid": "bec92b49-fe8f-58b3-9c63-ee10ddadaf80", 00:26:04.621 "is_configured": true, 00:26:04.621 "data_offset": 2048, 00:26:04.621 "data_size": 63488 00:26:04.621 } 00:26:04.621 ] 00:26:04.621 }' 00:26:04.621 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:04.621 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:04.621 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:04.621 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:04.621 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:04.621 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:04.621 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:04.621 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:04.621 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:04.621 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:04.621 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:04.621 23:07:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:04.621 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:04.621 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:04.621 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:04.621 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:04.621 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.621 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.621 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.621 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:04.621 "name": "raid_bdev1", 00:26:04.621 "uuid": "848e9bff-5cbd-4708-a0f5-0981484e88f5", 00:26:04.621 "strip_size_kb": 64, 00:26:04.621 "state": "online", 00:26:04.621 "raid_level": "raid5f", 00:26:04.621 "superblock": true, 00:26:04.621 "num_base_bdevs": 3, 00:26:04.621 "num_base_bdevs_discovered": 3, 00:26:04.621 "num_base_bdevs_operational": 3, 00:26:04.621 "base_bdevs_list": [ 00:26:04.621 { 00:26:04.621 "name": "spare", 00:26:04.621 "uuid": "4b333b03-deb6-515a-883b-6f096771290d", 00:26:04.621 "is_configured": true, 00:26:04.621 "data_offset": 2048, 00:26:04.621 "data_size": 63488 00:26:04.621 }, 00:26:04.621 { 00:26:04.621 "name": "BaseBdev2", 00:26:04.621 "uuid": "ccbcfaba-00db-5707-8cdf-37d50765306c", 00:26:04.621 "is_configured": true, 00:26:04.621 "data_offset": 2048, 00:26:04.621 "data_size": 63488 00:26:04.621 }, 00:26:04.621 { 00:26:04.621 "name": "BaseBdev3", 00:26:04.621 "uuid": "bec92b49-fe8f-58b3-9c63-ee10ddadaf80", 00:26:04.621 "is_configured": true, 00:26:04.621 "data_offset": 2048, 00:26:04.621 
"data_size": 63488 00:26:04.621 } 00:26:04.621 ] 00:26:04.621 }' 00:26:04.621 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:04.621 23:07:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.882 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:04.882 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.882 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.882 [2024-12-09 23:07:40.223916] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:04.882 [2024-12-09 23:07:40.223944] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:04.882 [2024-12-09 23:07:40.224011] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:04.882 [2024-12-09 23:07:40.224075] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:04.882 [2024-12-09 23:07:40.224093] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:26:04.882 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.882 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:04.882 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.882 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.882 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:26:04.882 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.144 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 
00:26:05.144 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:26:05.144 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:26:05.144 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:05.144 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:26:05.144 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:26:05.144 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:05.144 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:05.144 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:05.144 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:26:05.144 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:05.144 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:05.144 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:05.144 /dev/nbd0 00:26:05.144 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:05.144 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:05.144 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:05.144 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:26:05.144 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:05.144 23:07:40 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:05.144 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:05.144 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:26:05.144 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:05.144 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:05.144 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:05.144 1+0 records in 00:26:05.144 1+0 records out 00:26:05.144 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251946 s, 16.3 MB/s 00:26:05.144 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:05.144 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:26:05.144 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:05.144 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:05.144 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:26:05.144 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:05.144 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:05.144 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:26:05.404 /dev/nbd1 00:26:05.404 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:05.404 23:07:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:05.404 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:26:05.404 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:26:05.404 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:05.404 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:05.404 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:26:05.404 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:26:05.404 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:05.404 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:05.404 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:05.404 1+0 records in 00:26:05.404 1+0 records out 00:26:05.404 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000190155 s, 21.5 MB/s 00:26:05.404 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:05.404 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:26:05.404 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:05.404 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:05.404 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:26:05.404 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:05.404 23:07:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:05.404 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:26:05.665 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:26:05.665 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:26:05.665 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:05.665 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:05.665 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:26:05.665 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:05.665 23:07:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:26:05.926 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:05.926 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:05.926 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:05.926 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:05.926 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:05.926 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:05.926 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:26:05.926 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:26:05.926 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:05.926 
23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:26:05.926 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:05.926 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:05.926 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:05.926 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:05.926 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:05.926 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:05.926 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:26:05.926 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:26:05.926 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:26:05.926 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:26:05.926 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.926 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.188 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.188 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:26:06.188 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.188 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.188 [2024-12-09 23:07:41.295186] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:06.188 
[2024-12-09 23:07:41.295235] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:06.188 [2024-12-09 23:07:41.295252] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:26:06.188 [2024-12-09 23:07:41.295261] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:06.188 [2024-12-09 23:07:41.297194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:06.188 [2024-12-09 23:07:41.297226] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:06.188 [2024-12-09 23:07:41.297304] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:06.188 [2024-12-09 23:07:41.297344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:06.188 [2024-12-09 23:07:41.297452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:06.188 [2024-12-09 23:07:41.297533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:06.188 spare 00:26:06.188 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.188 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:26:06.188 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.188 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.188 [2024-12-09 23:07:41.397610] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:26:06.188 [2024-12-09 23:07:41.397654] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:06.188 [2024-12-09 23:07:41.397924] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:26:06.188 [2024-12-09 23:07:41.400887] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:26:06.188 [2024-12-09 23:07:41.400908] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:26:06.188 [2024-12-09 23:07:41.401070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:06.188 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.188 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:06.188 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:06.188 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:06.188 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:06.188 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:06.188 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:06.188 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:06.188 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:06.188 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:06.188 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:06.188 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:06.188 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:06.188 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.188 23:07:41 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:26:06.188 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.188 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:06.188 "name": "raid_bdev1", 00:26:06.188 "uuid": "848e9bff-5cbd-4708-a0f5-0981484e88f5", 00:26:06.188 "strip_size_kb": 64, 00:26:06.188 "state": "online", 00:26:06.188 "raid_level": "raid5f", 00:26:06.188 "superblock": true, 00:26:06.188 "num_base_bdevs": 3, 00:26:06.188 "num_base_bdevs_discovered": 3, 00:26:06.188 "num_base_bdevs_operational": 3, 00:26:06.188 "base_bdevs_list": [ 00:26:06.188 { 00:26:06.188 "name": "spare", 00:26:06.188 "uuid": "4b333b03-deb6-515a-883b-6f096771290d", 00:26:06.188 "is_configured": true, 00:26:06.188 "data_offset": 2048, 00:26:06.188 "data_size": 63488 00:26:06.188 }, 00:26:06.188 { 00:26:06.188 "name": "BaseBdev2", 00:26:06.188 "uuid": "ccbcfaba-00db-5707-8cdf-37d50765306c", 00:26:06.188 "is_configured": true, 00:26:06.188 "data_offset": 2048, 00:26:06.188 "data_size": 63488 00:26:06.188 }, 00:26:06.188 { 00:26:06.188 "name": "BaseBdev3", 00:26:06.188 "uuid": "bec92b49-fe8f-58b3-9c63-ee10ddadaf80", 00:26:06.188 "is_configured": true, 00:26:06.188 "data_offset": 2048, 00:26:06.188 "data_size": 63488 00:26:06.188 } 00:26:06.188 ] 00:26:06.188 }' 00:26:06.188 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:06.188 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.449 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:06.449 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:06.449 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:06.449 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # 
local target=none 00:26:06.449 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:06.449 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:06.449 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:06.449 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.449 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.449 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.449 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:06.449 "name": "raid_bdev1", 00:26:06.449 "uuid": "848e9bff-5cbd-4708-a0f5-0981484e88f5", 00:26:06.449 "strip_size_kb": 64, 00:26:06.449 "state": "online", 00:26:06.449 "raid_level": "raid5f", 00:26:06.449 "superblock": true, 00:26:06.449 "num_base_bdevs": 3, 00:26:06.449 "num_base_bdevs_discovered": 3, 00:26:06.449 "num_base_bdevs_operational": 3, 00:26:06.449 "base_bdevs_list": [ 00:26:06.449 { 00:26:06.449 "name": "spare", 00:26:06.449 "uuid": "4b333b03-deb6-515a-883b-6f096771290d", 00:26:06.449 "is_configured": true, 00:26:06.449 "data_offset": 2048, 00:26:06.449 "data_size": 63488 00:26:06.449 }, 00:26:06.449 { 00:26:06.449 "name": "BaseBdev2", 00:26:06.449 "uuid": "ccbcfaba-00db-5707-8cdf-37d50765306c", 00:26:06.449 "is_configured": true, 00:26:06.449 "data_offset": 2048, 00:26:06.449 "data_size": 63488 00:26:06.449 }, 00:26:06.449 { 00:26:06.449 "name": "BaseBdev3", 00:26:06.450 "uuid": "bec92b49-fe8f-58b3-9c63-ee10ddadaf80", 00:26:06.450 "is_configured": true, 00:26:06.450 "data_offset": 2048, 00:26:06.450 "data_size": 63488 00:26:06.450 } 00:26:06.450 ] 00:26:06.450 }' 00:26:06.450 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:26:06.450 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:06.450 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:06.710 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:06.710 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:26:06.710 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:06.710 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.710 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.710 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.710 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:26:06.710 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:26:06.710 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.710 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.710 [2024-12-09 23:07:41.869146] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:06.710 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.710 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:26:06.710 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:06.711 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:06.711 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:26:06.711 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:06.711 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:06.711 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:06.711 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:06.711 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:06.711 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:06.711 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:06.711 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:06.711 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.711 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.711 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.711 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:06.711 "name": "raid_bdev1", 00:26:06.711 "uuid": "848e9bff-5cbd-4708-a0f5-0981484e88f5", 00:26:06.711 "strip_size_kb": 64, 00:26:06.711 "state": "online", 00:26:06.711 "raid_level": "raid5f", 00:26:06.711 "superblock": true, 00:26:06.711 "num_base_bdevs": 3, 00:26:06.711 "num_base_bdevs_discovered": 2, 00:26:06.711 "num_base_bdevs_operational": 2, 00:26:06.711 "base_bdevs_list": [ 00:26:06.711 { 00:26:06.711 "name": null, 00:26:06.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.711 "is_configured": false, 00:26:06.711 "data_offset": 0, 00:26:06.711 "data_size": 63488 00:26:06.711 }, 00:26:06.711 { 00:26:06.711 "name": "BaseBdev2", 
00:26:06.711 "uuid": "ccbcfaba-00db-5707-8cdf-37d50765306c", 00:26:06.711 "is_configured": true, 00:26:06.711 "data_offset": 2048, 00:26:06.711 "data_size": 63488 00:26:06.711 }, 00:26:06.711 { 00:26:06.711 "name": "BaseBdev3", 00:26:06.711 "uuid": "bec92b49-fe8f-58b3-9c63-ee10ddadaf80", 00:26:06.711 "is_configured": true, 00:26:06.711 "data_offset": 2048, 00:26:06.711 "data_size": 63488 00:26:06.711 } 00:26:06.711 ] 00:26:06.711 }' 00:26:06.711 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:06.711 23:07:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.971 23:07:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:26:06.971 23:07:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.971 23:07:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.971 [2024-12-09 23:07:42.229218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:06.971 [2024-12-09 23:07:42.229371] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:26:06.971 [2024-12-09 23:07:42.229386] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:26:06.971 [2024-12-09 23:07:42.229416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:06.971 [2024-12-09 23:07:42.237814] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:26:06.971 23:07:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.971 23:07:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:26:06.971 [2024-12-09 23:07:42.242220] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:07.916 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:07.916 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:07.916 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:07.916 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:07.916 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:07.916 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:07.916 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.916 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.916 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:07.916 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.916 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:07.916 "name": "raid_bdev1", 00:26:07.916 "uuid": "848e9bff-5cbd-4708-a0f5-0981484e88f5", 00:26:07.916 "strip_size_kb": 64, 00:26:07.916 "state": "online", 00:26:07.916 
"raid_level": "raid5f", 00:26:07.916 "superblock": true, 00:26:07.916 "num_base_bdevs": 3, 00:26:07.916 "num_base_bdevs_discovered": 3, 00:26:07.916 "num_base_bdevs_operational": 3, 00:26:07.916 "process": { 00:26:07.916 "type": "rebuild", 00:26:07.916 "target": "spare", 00:26:07.916 "progress": { 00:26:07.916 "blocks": 18432, 00:26:07.916 "percent": 14 00:26:07.916 } 00:26:07.916 }, 00:26:07.916 "base_bdevs_list": [ 00:26:07.916 { 00:26:07.916 "name": "spare", 00:26:07.916 "uuid": "4b333b03-deb6-515a-883b-6f096771290d", 00:26:07.916 "is_configured": true, 00:26:07.916 "data_offset": 2048, 00:26:07.916 "data_size": 63488 00:26:07.916 }, 00:26:07.916 { 00:26:07.916 "name": "BaseBdev2", 00:26:07.916 "uuid": "ccbcfaba-00db-5707-8cdf-37d50765306c", 00:26:07.916 "is_configured": true, 00:26:07.916 "data_offset": 2048, 00:26:07.916 "data_size": 63488 00:26:07.916 }, 00:26:07.916 { 00:26:07.916 "name": "BaseBdev3", 00:26:07.916 "uuid": "bec92b49-fe8f-58b3-9c63-ee10ddadaf80", 00:26:07.916 "is_configured": true, 00:26:07.916 "data_offset": 2048, 00:26:07.916 "data_size": 63488 00:26:07.916 } 00:26:07.916 ] 00:26:07.916 }' 00:26:07.916 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:08.178 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:08.178 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:08.178 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:08.178 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:26:08.178 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.178 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.178 [2024-12-09 23:07:43.343445] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:08.178 [2024-12-09 23:07:43.351409] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:08.178 [2024-12-09 23:07:43.351462] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:08.178 [2024-12-09 23:07:43.351475] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:08.178 [2024-12-09 23:07:43.351482] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:08.178 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.178 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:26:08.178 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:08.178 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:08.178 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:08.178 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:08.178 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:08.178 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:08.178 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:08.178 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:08.178 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:08.178 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:08.178 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.178 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.178 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:08.178 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.178 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:08.178 "name": "raid_bdev1", 00:26:08.178 "uuid": "848e9bff-5cbd-4708-a0f5-0981484e88f5", 00:26:08.178 "strip_size_kb": 64, 00:26:08.178 "state": "online", 00:26:08.178 "raid_level": "raid5f", 00:26:08.178 "superblock": true, 00:26:08.178 "num_base_bdevs": 3, 00:26:08.178 "num_base_bdevs_discovered": 2, 00:26:08.178 "num_base_bdevs_operational": 2, 00:26:08.178 "base_bdevs_list": [ 00:26:08.178 { 00:26:08.178 "name": null, 00:26:08.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:08.178 "is_configured": false, 00:26:08.178 "data_offset": 0, 00:26:08.178 "data_size": 63488 00:26:08.178 }, 00:26:08.178 { 00:26:08.178 "name": "BaseBdev2", 00:26:08.178 "uuid": "ccbcfaba-00db-5707-8cdf-37d50765306c", 00:26:08.178 "is_configured": true, 00:26:08.178 "data_offset": 2048, 00:26:08.178 "data_size": 63488 00:26:08.178 }, 00:26:08.178 { 00:26:08.178 "name": "BaseBdev3", 00:26:08.178 "uuid": "bec92b49-fe8f-58b3-9c63-ee10ddadaf80", 00:26:08.178 "is_configured": true, 00:26:08.178 "data_offset": 2048, 00:26:08.178 "data_size": 63488 00:26:08.178 } 00:26:08.178 ] 00:26:08.178 }' 00:26:08.178 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:08.178 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.439 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:26:08.439 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.439 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.439 [2024-12-09 23:07:43.737817] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:08.439 [2024-12-09 23:07:43.737869] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:08.439 [2024-12-09 23:07:43.737885] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:26:08.439 [2024-12-09 23:07:43.737896] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:08.439 [2024-12-09 23:07:43.738301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:08.439 [2024-12-09 23:07:43.738330] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:08.439 [2024-12-09 23:07:43.738407] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:08.439 [2024-12-09 23:07:43.738421] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:26:08.439 [2024-12-09 23:07:43.738429] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:26:08.439 [2024-12-09 23:07:43.738450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:08.439 [2024-12-09 23:07:43.746712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:26:08.439 spare 00:26:08.439 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.439 23:07:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:26:08.439 [2024-12-09 23:07:43.751182] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:09.828 23:07:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:09.829 23:07:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:09.829 23:07:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:09.829 23:07:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:09.829 23:07:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:09.829 23:07:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:09.829 23:07:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.829 23:07:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:09.829 23:07:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:09.829 23:07:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.829 23:07:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:09.829 "name": "raid_bdev1", 00:26:09.829 "uuid": "848e9bff-5cbd-4708-a0f5-0981484e88f5", 00:26:09.829 "strip_size_kb": 64, 00:26:09.829 "state": 
"online", 00:26:09.829 "raid_level": "raid5f", 00:26:09.829 "superblock": true, 00:26:09.829 "num_base_bdevs": 3, 00:26:09.829 "num_base_bdevs_discovered": 3, 00:26:09.829 "num_base_bdevs_operational": 3, 00:26:09.829 "process": { 00:26:09.829 "type": "rebuild", 00:26:09.829 "target": "spare", 00:26:09.829 "progress": { 00:26:09.829 "blocks": 20480, 00:26:09.829 "percent": 16 00:26:09.829 } 00:26:09.829 }, 00:26:09.829 "base_bdevs_list": [ 00:26:09.829 { 00:26:09.829 "name": "spare", 00:26:09.829 "uuid": "4b333b03-deb6-515a-883b-6f096771290d", 00:26:09.829 "is_configured": true, 00:26:09.829 "data_offset": 2048, 00:26:09.829 "data_size": 63488 00:26:09.829 }, 00:26:09.829 { 00:26:09.829 "name": "BaseBdev2", 00:26:09.829 "uuid": "ccbcfaba-00db-5707-8cdf-37d50765306c", 00:26:09.829 "is_configured": true, 00:26:09.829 "data_offset": 2048, 00:26:09.829 "data_size": 63488 00:26:09.829 }, 00:26:09.829 { 00:26:09.829 "name": "BaseBdev3", 00:26:09.829 "uuid": "bec92b49-fe8f-58b3-9c63-ee10ddadaf80", 00:26:09.829 "is_configured": true, 00:26:09.829 "data_offset": 2048, 00:26:09.829 "data_size": 63488 00:26:09.829 } 00:26:09.829 ] 00:26:09.829 }' 00:26:09.829 23:07:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:09.829 23:07:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:09.829 23:07:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:09.829 23:07:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:09.829 23:07:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:26:09.829 23:07:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.829 23:07:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:09.829 [2024-12-09 23:07:44.864744] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:09.829 [2024-12-09 23:07:44.960938] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:09.829 [2024-12-09 23:07:44.961001] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:09.829 [2024-12-09 23:07:44.961016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:09.829 [2024-12-09 23:07:44.961022] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:09.829 23:07:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.829 23:07:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:26:09.829 23:07:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:09.829 23:07:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:09.829 23:07:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:09.829 23:07:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:09.829 23:07:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:09.829 23:07:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:09.829 23:07:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:09.829 23:07:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:09.829 23:07:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:09.829 23:07:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:09.829 23:07:44 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:09.829 23:07:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.829 23:07:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:09.829 23:07:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.829 23:07:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:09.829 "name": "raid_bdev1", 00:26:09.829 "uuid": "848e9bff-5cbd-4708-a0f5-0981484e88f5", 00:26:09.829 "strip_size_kb": 64, 00:26:09.829 "state": "online", 00:26:09.829 "raid_level": "raid5f", 00:26:09.829 "superblock": true, 00:26:09.829 "num_base_bdevs": 3, 00:26:09.829 "num_base_bdevs_discovered": 2, 00:26:09.829 "num_base_bdevs_operational": 2, 00:26:09.829 "base_bdevs_list": [ 00:26:09.829 { 00:26:09.829 "name": null, 00:26:09.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:09.829 "is_configured": false, 00:26:09.829 "data_offset": 0, 00:26:09.829 "data_size": 63488 00:26:09.829 }, 00:26:09.829 { 00:26:09.829 "name": "BaseBdev2", 00:26:09.830 "uuid": "ccbcfaba-00db-5707-8cdf-37d50765306c", 00:26:09.830 "is_configured": true, 00:26:09.830 "data_offset": 2048, 00:26:09.830 "data_size": 63488 00:26:09.830 }, 00:26:09.830 { 00:26:09.830 "name": "BaseBdev3", 00:26:09.830 "uuid": "bec92b49-fe8f-58b3-9c63-ee10ddadaf80", 00:26:09.830 "is_configured": true, 00:26:09.830 "data_offset": 2048, 00:26:09.830 "data_size": 63488 00:26:09.830 } 00:26:09.830 ] 00:26:09.830 }' 00:26:09.830 23:07:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:09.830 23:07:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:10.093 23:07:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:10.093 23:07:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:26:10.093 23:07:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:10.093 23:07:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:10.093 23:07:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:10.093 23:07:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:10.093 23:07:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:10.093 23:07:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.093 23:07:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:10.093 23:07:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.093 23:07:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:10.093 "name": "raid_bdev1", 00:26:10.093 "uuid": "848e9bff-5cbd-4708-a0f5-0981484e88f5", 00:26:10.093 "strip_size_kb": 64, 00:26:10.093 "state": "online", 00:26:10.093 "raid_level": "raid5f", 00:26:10.093 "superblock": true, 00:26:10.093 "num_base_bdevs": 3, 00:26:10.093 "num_base_bdevs_discovered": 2, 00:26:10.093 "num_base_bdevs_operational": 2, 00:26:10.093 "base_bdevs_list": [ 00:26:10.093 { 00:26:10.093 "name": null, 00:26:10.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:10.093 "is_configured": false, 00:26:10.093 "data_offset": 0, 00:26:10.093 "data_size": 63488 00:26:10.093 }, 00:26:10.093 { 00:26:10.093 "name": "BaseBdev2", 00:26:10.093 "uuid": "ccbcfaba-00db-5707-8cdf-37d50765306c", 00:26:10.093 "is_configured": true, 00:26:10.093 "data_offset": 2048, 00:26:10.093 "data_size": 63488 00:26:10.093 }, 00:26:10.093 { 00:26:10.093 "name": "BaseBdev3", 00:26:10.093 "uuid": "bec92b49-fe8f-58b3-9c63-ee10ddadaf80", 00:26:10.093 "is_configured": true, 
00:26:10.093 "data_offset": 2048, 00:26:10.093 "data_size": 63488 00:26:10.093 } 00:26:10.093 ] 00:26:10.093 }' 00:26:10.093 23:07:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:10.093 23:07:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:10.093 23:07:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:10.093 23:07:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:10.093 23:07:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:26:10.093 23:07:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.093 23:07:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:10.093 23:07:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.093 23:07:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:10.093 23:07:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.093 23:07:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:10.093 [2024-12-09 23:07:45.423410] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:10.093 [2024-12-09 23:07:45.423457] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:10.093 [2024-12-09 23:07:45.423477] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:26:10.093 [2024-12-09 23:07:45.423485] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:10.093 [2024-12-09 23:07:45.423869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:10.093 [2024-12-09 
23:07:45.423882] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:10.093 [2024-12-09 23:07:45.423947] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:26:10.093 [2024-12-09 23:07:45.423959] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:26:10.093 [2024-12-09 23:07:45.423967] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:10.093 [2024-12-09 23:07:45.423974] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:26:10.093 BaseBdev1 00:26:10.093 23:07:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.093 23:07:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:26:11.079 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:26:11.079 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:11.079 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:11.079 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:11.079 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:11.079 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:11.079 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:11.079 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:11.079 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:11.079 23:07:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:11.079 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:11.079 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:11.079 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.079 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:11.344 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.344 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:11.344 "name": "raid_bdev1", 00:26:11.344 "uuid": "848e9bff-5cbd-4708-a0f5-0981484e88f5", 00:26:11.344 "strip_size_kb": 64, 00:26:11.344 "state": "online", 00:26:11.344 "raid_level": "raid5f", 00:26:11.344 "superblock": true, 00:26:11.344 "num_base_bdevs": 3, 00:26:11.344 "num_base_bdevs_discovered": 2, 00:26:11.344 "num_base_bdevs_operational": 2, 00:26:11.344 "base_bdevs_list": [ 00:26:11.344 { 00:26:11.344 "name": null, 00:26:11.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.344 "is_configured": false, 00:26:11.344 "data_offset": 0, 00:26:11.344 "data_size": 63488 00:26:11.344 }, 00:26:11.344 { 00:26:11.344 "name": "BaseBdev2", 00:26:11.344 "uuid": "ccbcfaba-00db-5707-8cdf-37d50765306c", 00:26:11.344 "is_configured": true, 00:26:11.344 "data_offset": 2048, 00:26:11.344 "data_size": 63488 00:26:11.344 }, 00:26:11.344 { 00:26:11.344 "name": "BaseBdev3", 00:26:11.344 "uuid": "bec92b49-fe8f-58b3-9c63-ee10ddadaf80", 00:26:11.344 "is_configured": true, 00:26:11.344 "data_offset": 2048, 00:26:11.344 "data_size": 63488 00:26:11.344 } 00:26:11.344 ] 00:26:11.344 }' 00:26:11.344 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:11.344 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:26:11.611 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:11.611 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:11.611 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:11.611 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:11.611 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:11.611 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:11.611 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:11.611 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.611 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:11.611 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.611 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:11.611 "name": "raid_bdev1", 00:26:11.611 "uuid": "848e9bff-5cbd-4708-a0f5-0981484e88f5", 00:26:11.611 "strip_size_kb": 64, 00:26:11.611 "state": "online", 00:26:11.611 "raid_level": "raid5f", 00:26:11.611 "superblock": true, 00:26:11.611 "num_base_bdevs": 3, 00:26:11.611 "num_base_bdevs_discovered": 2, 00:26:11.611 "num_base_bdevs_operational": 2, 00:26:11.611 "base_bdevs_list": [ 00:26:11.611 { 00:26:11.611 "name": null, 00:26:11.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.611 "is_configured": false, 00:26:11.611 "data_offset": 0, 00:26:11.611 "data_size": 63488 00:26:11.611 }, 00:26:11.611 { 00:26:11.611 "name": "BaseBdev2", 00:26:11.611 "uuid": "ccbcfaba-00db-5707-8cdf-37d50765306c", 
00:26:11.611 "is_configured": true, 00:26:11.611 "data_offset": 2048, 00:26:11.611 "data_size": 63488 00:26:11.611 }, 00:26:11.611 { 00:26:11.611 "name": "BaseBdev3", 00:26:11.611 "uuid": "bec92b49-fe8f-58b3-9c63-ee10ddadaf80", 00:26:11.611 "is_configured": true, 00:26:11.611 "data_offset": 2048, 00:26:11.611 "data_size": 63488 00:26:11.611 } 00:26:11.611 ] 00:26:11.611 }' 00:26:11.611 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:11.611 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:11.611 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:11.611 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:11.611 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:11.611 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:26:11.611 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:11.611 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:11.611 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:11.611 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:11.611 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:11.611 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:11.611 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.611 23:07:46 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:11.611 [2024-12-09 23:07:46.867746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:11.611 [2024-12-09 23:07:46.867876] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:26:11.611 [2024-12-09 23:07:46.867888] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:11.611 request: 00:26:11.611 { 00:26:11.611 "base_bdev": "BaseBdev1", 00:26:11.611 "raid_bdev": "raid_bdev1", 00:26:11.611 "method": "bdev_raid_add_base_bdev", 00:26:11.611 "req_id": 1 00:26:11.611 } 00:26:11.611 Got JSON-RPC error response 00:26:11.611 response: 00:26:11.611 { 00:26:11.611 "code": -22, 00:26:11.611 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:26:11.611 } 00:26:11.611 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:11.611 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:26:11.611 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:11.611 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:11.611 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:11.611 23:07:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:26:12.557 23:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:26:12.557 23:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:12.557 23:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:12.557 23:07:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:12.557 23:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:12.557 23:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:12.557 23:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:12.557 23:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:12.557 23:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:12.557 23:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:12.557 23:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:12.557 23:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:12.557 23:07:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.557 23:07:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:12.557 23:07:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.557 23:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:12.557 "name": "raid_bdev1", 00:26:12.557 "uuid": "848e9bff-5cbd-4708-a0f5-0981484e88f5", 00:26:12.557 "strip_size_kb": 64, 00:26:12.557 "state": "online", 00:26:12.557 "raid_level": "raid5f", 00:26:12.557 "superblock": true, 00:26:12.557 "num_base_bdevs": 3, 00:26:12.557 "num_base_bdevs_discovered": 2, 00:26:12.557 "num_base_bdevs_operational": 2, 00:26:12.557 "base_bdevs_list": [ 00:26:12.557 { 00:26:12.557 "name": null, 00:26:12.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:12.557 "is_configured": false, 00:26:12.557 "data_offset": 0, 00:26:12.557 "data_size": 63488 00:26:12.557 }, 00:26:12.557 { 00:26:12.557 
"name": "BaseBdev2", 00:26:12.557 "uuid": "ccbcfaba-00db-5707-8cdf-37d50765306c", 00:26:12.557 "is_configured": true, 00:26:12.557 "data_offset": 2048, 00:26:12.557 "data_size": 63488 00:26:12.557 }, 00:26:12.557 { 00:26:12.557 "name": "BaseBdev3", 00:26:12.557 "uuid": "bec92b49-fe8f-58b3-9c63-ee10ddadaf80", 00:26:12.557 "is_configured": true, 00:26:12.557 "data_offset": 2048, 00:26:12.557 "data_size": 63488 00:26:12.557 } 00:26:12.557 ] 00:26:12.557 }' 00:26:12.557 23:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:12.557 23:07:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:13.129 23:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:13.129 23:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:13.129 23:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:13.129 23:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:13.129 23:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:13.129 23:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:13.129 23:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:13.129 23:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.129 23:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:13.129 23:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.129 23:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:13.129 "name": "raid_bdev1", 00:26:13.129 "uuid": "848e9bff-5cbd-4708-a0f5-0981484e88f5", 00:26:13.129 
"strip_size_kb": 64, 00:26:13.129 "state": "online", 00:26:13.129 "raid_level": "raid5f", 00:26:13.129 "superblock": true, 00:26:13.129 "num_base_bdevs": 3, 00:26:13.129 "num_base_bdevs_discovered": 2, 00:26:13.129 "num_base_bdevs_operational": 2, 00:26:13.129 "base_bdevs_list": [ 00:26:13.129 { 00:26:13.129 "name": null, 00:26:13.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.129 "is_configured": false, 00:26:13.129 "data_offset": 0, 00:26:13.129 "data_size": 63488 00:26:13.129 }, 00:26:13.129 { 00:26:13.129 "name": "BaseBdev2", 00:26:13.129 "uuid": "ccbcfaba-00db-5707-8cdf-37d50765306c", 00:26:13.129 "is_configured": true, 00:26:13.129 "data_offset": 2048, 00:26:13.129 "data_size": 63488 00:26:13.129 }, 00:26:13.129 { 00:26:13.129 "name": "BaseBdev3", 00:26:13.129 "uuid": "bec92b49-fe8f-58b3-9c63-ee10ddadaf80", 00:26:13.129 "is_configured": true, 00:26:13.129 "data_offset": 2048, 00:26:13.129 "data_size": 63488 00:26:13.129 } 00:26:13.129 ] 00:26:13.129 }' 00:26:13.129 23:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:13.129 23:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:13.129 23:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:13.129 23:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:13.129 23:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 79747 00:26:13.129 23:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 79747 ']' 00:26:13.129 23:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 79747 00:26:13.129 23:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:26:13.129 23:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:13.129 23:07:48 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79747 00:26:13.129 23:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:13.129 killing process with pid 79747 00:26:13.129 23:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:13.129 23:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79747' 00:26:13.129 Received shutdown signal, test time was about 60.000000 seconds 00:26:13.129 00:26:13.129 Latency(us) 00:26:13.129 [2024-12-09T23:07:48.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:13.129 [2024-12-09T23:07:48.492Z] =================================================================================================================== 00:26:13.129 [2024-12-09T23:07:48.492Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:13.129 23:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 79747 00:26:13.129 23:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 79747 00:26:13.129 [2024-12-09 23:07:48.360986] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:13.129 [2024-12-09 23:07:48.361094] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:13.129 [2024-12-09 23:07:48.361159] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:13.129 [2024-12-09 23:07:48.361169] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:26:13.390 [2024-12-09 23:07:48.559434] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:13.983 23:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:26:13.983 00:26:13.983 real 0m20.154s 00:26:13.983 user 0m25.297s 
00:26:13.983 sys 0m1.926s 00:26:13.983 23:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:13.983 23:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:13.983 ************************************ 00:26:13.983 END TEST raid5f_rebuild_test_sb 00:26:13.983 ************************************ 00:26:13.983 23:07:49 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:26:13.983 23:07:49 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:26:13.983 23:07:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:13.983 23:07:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:13.983 23:07:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:13.983 ************************************ 00:26:13.983 START TEST raid5f_state_function_test 00:26:13.984 ************************************ 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80461 00:26:13.984 Process raid pid: 80461 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80461' 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80461 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80461 ']' 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.984 23:07:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:26:13.984 [2024-12-09 23:07:49.246349] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:26:13.984 [2024-12-09 23:07:49.246465] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:14.244 [2024-12-09 23:07:49.403000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.244 [2024-12-09 23:07:49.489307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.505 [2024-12-09 23:07:49.606589] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:14.505 [2024-12-09 23:07:49.606623] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:14.767 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:14.767 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:26:14.767 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:26:14.767 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.767 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.767 [2024-12-09 23:07:50.098511] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:14.767 [2024-12-09 23:07:50.098559] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:14.767 [2024-12-09 23:07:50.098569] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:14.767 [2024-12-09 23:07:50.098581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:14.767 [2024-12-09 23:07:50.098587] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:26:14.767 [2024-12-09 23:07:50.098593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:14.767 [2024-12-09 23:07:50.098598] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:14.767 [2024-12-09 23:07:50.098606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:14.767 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.767 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:14.767 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:14.767 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:14.767 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:14.767 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:14.767 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:14.767 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:14.767 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:14.767 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:14.767 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:14.767 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:14.767 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.767 23:07:50 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:26:14.767 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:14.767 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.029 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:15.029 "name": "Existed_Raid", 00:26:15.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.029 "strip_size_kb": 64, 00:26:15.029 "state": "configuring", 00:26:15.029 "raid_level": "raid5f", 00:26:15.029 "superblock": false, 00:26:15.029 "num_base_bdevs": 4, 00:26:15.029 "num_base_bdevs_discovered": 0, 00:26:15.029 "num_base_bdevs_operational": 4, 00:26:15.029 "base_bdevs_list": [ 00:26:15.029 { 00:26:15.029 "name": "BaseBdev1", 00:26:15.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.029 "is_configured": false, 00:26:15.029 "data_offset": 0, 00:26:15.029 "data_size": 0 00:26:15.029 }, 00:26:15.029 { 00:26:15.029 "name": "BaseBdev2", 00:26:15.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.029 "is_configured": false, 00:26:15.029 "data_offset": 0, 00:26:15.029 "data_size": 0 00:26:15.029 }, 00:26:15.029 { 00:26:15.029 "name": "BaseBdev3", 00:26:15.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.030 "is_configured": false, 00:26:15.030 "data_offset": 0, 00:26:15.030 "data_size": 0 00:26:15.030 }, 00:26:15.030 { 00:26:15.030 "name": "BaseBdev4", 00:26:15.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.030 "is_configured": false, 00:26:15.030 "data_offset": 0, 00:26:15.030 "data_size": 0 00:26:15.030 } 00:26:15.030 ] 00:26:15.030 }' 00:26:15.030 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:15.030 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.291 23:07:50 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:15.291 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.291 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.291 [2024-12-09 23:07:50.414541] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:15.291 [2024-12-09 23:07:50.414577] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:26:15.291 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.291 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:26:15.291 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.291 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.291 [2024-12-09 23:07:50.422537] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:15.291 [2024-12-09 23:07:50.422573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:15.291 [2024-12-09 23:07:50.422580] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:15.291 [2024-12-09 23:07:50.422588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:15.291 [2024-12-09 23:07:50.422593] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:15.291 [2024-12-09 23:07:50.422601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:15.291 [2024-12-09 23:07:50.422607] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:26:15.291 [2024-12-09 23:07:50.422614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:15.291 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.291 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:15.291 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.291 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.291 [2024-12-09 23:07:50.451049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:15.291 BaseBdev1 00:26:15.291 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.291 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:26:15.291 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:26:15.291 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:15.291 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:26:15.291 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:15.291 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:15.291 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:15.291 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.291 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.291 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.291 
23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:15.291 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.291 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.291 [ 00:26:15.291 { 00:26:15.291 "name": "BaseBdev1", 00:26:15.291 "aliases": [ 00:26:15.291 "5b657219-3fa4-4212-af6c-61e4c9866289" 00:26:15.291 ], 00:26:15.291 "product_name": "Malloc disk", 00:26:15.291 "block_size": 512, 00:26:15.291 "num_blocks": 65536, 00:26:15.291 "uuid": "5b657219-3fa4-4212-af6c-61e4c9866289", 00:26:15.291 "assigned_rate_limits": { 00:26:15.291 "rw_ios_per_sec": 0, 00:26:15.291 "rw_mbytes_per_sec": 0, 00:26:15.291 "r_mbytes_per_sec": 0, 00:26:15.291 "w_mbytes_per_sec": 0 00:26:15.291 }, 00:26:15.291 "claimed": true, 00:26:15.291 "claim_type": "exclusive_write", 00:26:15.291 "zoned": false, 00:26:15.291 "supported_io_types": { 00:26:15.291 "read": true, 00:26:15.291 "write": true, 00:26:15.291 "unmap": true, 00:26:15.291 "flush": true, 00:26:15.291 "reset": true, 00:26:15.291 "nvme_admin": false, 00:26:15.291 "nvme_io": false, 00:26:15.291 "nvme_io_md": false, 00:26:15.291 "write_zeroes": true, 00:26:15.291 "zcopy": true, 00:26:15.291 "get_zone_info": false, 00:26:15.291 "zone_management": false, 00:26:15.291 "zone_append": false, 00:26:15.291 "compare": false, 00:26:15.291 "compare_and_write": false, 00:26:15.291 "abort": true, 00:26:15.291 "seek_hole": false, 00:26:15.291 "seek_data": false, 00:26:15.291 "copy": true, 00:26:15.291 "nvme_iov_md": false 00:26:15.291 }, 00:26:15.291 "memory_domains": [ 00:26:15.291 { 00:26:15.291 "dma_device_id": "system", 00:26:15.291 "dma_device_type": 1 00:26:15.291 }, 00:26:15.291 { 00:26:15.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:15.291 "dma_device_type": 2 00:26:15.291 } 00:26:15.291 ], 00:26:15.291 "driver_specific": {} 00:26:15.291 } 
00:26:15.291 ] 00:26:15.291 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.291 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:26:15.291 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:15.291 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:15.291 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:15.291 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:15.291 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:15.291 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:15.291 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:15.291 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:15.291 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:15.292 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:15.292 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:15.292 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.292 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:15.292 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.292 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:26:15.292 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:15.292 "name": "Existed_Raid", 00:26:15.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.292 "strip_size_kb": 64, 00:26:15.292 "state": "configuring", 00:26:15.292 "raid_level": "raid5f", 00:26:15.292 "superblock": false, 00:26:15.292 "num_base_bdevs": 4, 00:26:15.292 "num_base_bdevs_discovered": 1, 00:26:15.292 "num_base_bdevs_operational": 4, 00:26:15.292 "base_bdevs_list": [ 00:26:15.292 { 00:26:15.292 "name": "BaseBdev1", 00:26:15.292 "uuid": "5b657219-3fa4-4212-af6c-61e4c9866289", 00:26:15.292 "is_configured": true, 00:26:15.292 "data_offset": 0, 00:26:15.292 "data_size": 65536 00:26:15.292 }, 00:26:15.292 { 00:26:15.292 "name": "BaseBdev2", 00:26:15.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.292 "is_configured": false, 00:26:15.292 "data_offset": 0, 00:26:15.292 "data_size": 0 00:26:15.292 }, 00:26:15.292 { 00:26:15.292 "name": "BaseBdev3", 00:26:15.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.292 "is_configured": false, 00:26:15.292 "data_offset": 0, 00:26:15.292 "data_size": 0 00:26:15.292 }, 00:26:15.292 { 00:26:15.292 "name": "BaseBdev4", 00:26:15.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.292 "is_configured": false, 00:26:15.292 "data_offset": 0, 00:26:15.292 "data_size": 0 00:26:15.292 } 00:26:15.292 ] 00:26:15.292 }' 00:26:15.292 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:15.292 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.552 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:15.552 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.552 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.552 
[2024-12-09 23:07:50.807163] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:15.552 [2024-12-09 23:07:50.807327] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:26:15.552 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.552 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:26:15.552 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.552 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.552 [2024-12-09 23:07:50.815221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:15.553 [2024-12-09 23:07:50.816828] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:15.553 [2024-12-09 23:07:50.816867] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:15.553 [2024-12-09 23:07:50.816875] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:15.553 [2024-12-09 23:07:50.816883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:15.553 [2024-12-09 23:07:50.816889] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:15.553 [2024-12-09 23:07:50.816896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:15.553 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.553 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:26:15.553 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:26:15.553 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:15.553 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:15.553 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:15.553 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:15.553 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:15.553 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:15.553 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:15.553 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:15.553 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:15.553 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:15.553 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:15.553 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:15.553 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.553 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.553 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.553 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:15.553 "name": "Existed_Raid", 00:26:15.553 "uuid": "00000000-0000-0000-0000-000000000000", 
00:26:15.553 "strip_size_kb": 64, 00:26:15.553 "state": "configuring", 00:26:15.553 "raid_level": "raid5f", 00:26:15.553 "superblock": false, 00:26:15.553 "num_base_bdevs": 4, 00:26:15.553 "num_base_bdevs_discovered": 1, 00:26:15.553 "num_base_bdevs_operational": 4, 00:26:15.553 "base_bdevs_list": [ 00:26:15.553 { 00:26:15.553 "name": "BaseBdev1", 00:26:15.553 "uuid": "5b657219-3fa4-4212-af6c-61e4c9866289", 00:26:15.553 "is_configured": true, 00:26:15.553 "data_offset": 0, 00:26:15.553 "data_size": 65536 00:26:15.553 }, 00:26:15.553 { 00:26:15.553 "name": "BaseBdev2", 00:26:15.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.553 "is_configured": false, 00:26:15.553 "data_offset": 0, 00:26:15.553 "data_size": 0 00:26:15.553 }, 00:26:15.553 { 00:26:15.553 "name": "BaseBdev3", 00:26:15.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.553 "is_configured": false, 00:26:15.553 "data_offset": 0, 00:26:15.553 "data_size": 0 00:26:15.553 }, 00:26:15.553 { 00:26:15.553 "name": "BaseBdev4", 00:26:15.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.553 "is_configured": false, 00:26:15.553 "data_offset": 0, 00:26:15.553 "data_size": 0 00:26:15.553 } 00:26:15.553 ] 00:26:15.553 }' 00:26:15.553 23:07:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:15.553 23:07:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.813 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:15.813 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.813 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.073 [2024-12-09 23:07:51.198123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:16.073 BaseBdev2 00:26:16.073 23:07:51 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.073 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:26:16.073 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:26:16.074 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:16.074 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:26:16.074 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:16.074 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:16.074 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:16.074 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.074 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.074 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.074 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:16.074 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.074 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.074 [ 00:26:16.074 { 00:26:16.074 "name": "BaseBdev2", 00:26:16.074 "aliases": [ 00:26:16.074 "c71ada4f-099f-4bcb-8437-e76ce71bfeb9" 00:26:16.074 ], 00:26:16.074 "product_name": "Malloc disk", 00:26:16.074 "block_size": 512, 00:26:16.074 "num_blocks": 65536, 00:26:16.074 "uuid": "c71ada4f-099f-4bcb-8437-e76ce71bfeb9", 00:26:16.074 "assigned_rate_limits": { 00:26:16.074 "rw_ios_per_sec": 0, 00:26:16.074 "rw_mbytes_per_sec": 0, 00:26:16.074 
"r_mbytes_per_sec": 0, 00:26:16.074 "w_mbytes_per_sec": 0 00:26:16.074 }, 00:26:16.074 "claimed": true, 00:26:16.074 "claim_type": "exclusive_write", 00:26:16.074 "zoned": false, 00:26:16.074 "supported_io_types": { 00:26:16.074 "read": true, 00:26:16.074 "write": true, 00:26:16.074 "unmap": true, 00:26:16.074 "flush": true, 00:26:16.074 "reset": true, 00:26:16.074 "nvme_admin": false, 00:26:16.074 "nvme_io": false, 00:26:16.074 "nvme_io_md": false, 00:26:16.074 "write_zeroes": true, 00:26:16.074 "zcopy": true, 00:26:16.074 "get_zone_info": false, 00:26:16.074 "zone_management": false, 00:26:16.074 "zone_append": false, 00:26:16.074 "compare": false, 00:26:16.074 "compare_and_write": false, 00:26:16.074 "abort": true, 00:26:16.074 "seek_hole": false, 00:26:16.074 "seek_data": false, 00:26:16.074 "copy": true, 00:26:16.074 "nvme_iov_md": false 00:26:16.074 }, 00:26:16.074 "memory_domains": [ 00:26:16.074 { 00:26:16.074 "dma_device_id": "system", 00:26:16.074 "dma_device_type": 1 00:26:16.074 }, 00:26:16.074 { 00:26:16.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:16.074 "dma_device_type": 2 00:26:16.074 } 00:26:16.074 ], 00:26:16.074 "driver_specific": {} 00:26:16.074 } 00:26:16.074 ] 00:26:16.074 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.074 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:26:16.074 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:16.074 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:16.074 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:16.074 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:16.074 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:26:16.074 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:16.074 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:16.074 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:16.074 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:16.074 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:16.074 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:16.074 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:16.074 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:16.074 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:16.074 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.074 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.074 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.074 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:16.074 "name": "Existed_Raid", 00:26:16.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.074 "strip_size_kb": 64, 00:26:16.074 "state": "configuring", 00:26:16.074 "raid_level": "raid5f", 00:26:16.074 "superblock": false, 00:26:16.074 "num_base_bdevs": 4, 00:26:16.074 "num_base_bdevs_discovered": 2, 00:26:16.074 "num_base_bdevs_operational": 4, 00:26:16.074 "base_bdevs_list": [ 00:26:16.074 { 00:26:16.074 "name": "BaseBdev1", 00:26:16.074 "uuid": 
"5b657219-3fa4-4212-af6c-61e4c9866289", 00:26:16.074 "is_configured": true, 00:26:16.074 "data_offset": 0, 00:26:16.074 "data_size": 65536 00:26:16.074 }, 00:26:16.074 { 00:26:16.074 "name": "BaseBdev2", 00:26:16.074 "uuid": "c71ada4f-099f-4bcb-8437-e76ce71bfeb9", 00:26:16.074 "is_configured": true, 00:26:16.074 "data_offset": 0, 00:26:16.074 "data_size": 65536 00:26:16.074 }, 00:26:16.074 { 00:26:16.074 "name": "BaseBdev3", 00:26:16.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.074 "is_configured": false, 00:26:16.074 "data_offset": 0, 00:26:16.074 "data_size": 0 00:26:16.074 }, 00:26:16.074 { 00:26:16.074 "name": "BaseBdev4", 00:26:16.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.074 "is_configured": false, 00:26:16.074 "data_offset": 0, 00:26:16.074 "data_size": 0 00:26:16.074 } 00:26:16.074 ] 00:26:16.074 }' 00:26:16.074 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:16.074 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.334 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:26:16.334 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.334 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.334 [2024-12-09 23:07:51.584987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:16.334 BaseBdev3 00:26:16.334 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.334 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:26:16.334 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:26:16.334 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:26:16.334 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:26:16.334 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:16.334 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:16.334 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:16.334 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.334 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.334 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.334 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:16.334 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.334 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.334 [ 00:26:16.334 { 00:26:16.334 "name": "BaseBdev3", 00:26:16.334 "aliases": [ 00:26:16.334 "420db5dc-50b8-4fd0-bccd-a38b3f6a4794" 00:26:16.334 ], 00:26:16.334 "product_name": "Malloc disk", 00:26:16.334 "block_size": 512, 00:26:16.334 "num_blocks": 65536, 00:26:16.334 "uuid": "420db5dc-50b8-4fd0-bccd-a38b3f6a4794", 00:26:16.334 "assigned_rate_limits": { 00:26:16.334 "rw_ios_per_sec": 0, 00:26:16.334 "rw_mbytes_per_sec": 0, 00:26:16.334 "r_mbytes_per_sec": 0, 00:26:16.334 "w_mbytes_per_sec": 0 00:26:16.334 }, 00:26:16.334 "claimed": true, 00:26:16.334 "claim_type": "exclusive_write", 00:26:16.334 "zoned": false, 00:26:16.334 "supported_io_types": { 00:26:16.334 "read": true, 00:26:16.334 "write": true, 00:26:16.334 "unmap": true, 00:26:16.334 "flush": true, 00:26:16.334 "reset": true, 00:26:16.334 "nvme_admin": false, 
00:26:16.334 "nvme_io": false, 00:26:16.334 "nvme_io_md": false, 00:26:16.335 "write_zeroes": true, 00:26:16.335 "zcopy": true, 00:26:16.335 "get_zone_info": false, 00:26:16.335 "zone_management": false, 00:26:16.335 "zone_append": false, 00:26:16.335 "compare": false, 00:26:16.335 "compare_and_write": false, 00:26:16.335 "abort": true, 00:26:16.335 "seek_hole": false, 00:26:16.335 "seek_data": false, 00:26:16.335 "copy": true, 00:26:16.335 "nvme_iov_md": false 00:26:16.335 }, 00:26:16.335 "memory_domains": [ 00:26:16.335 { 00:26:16.335 "dma_device_id": "system", 00:26:16.335 "dma_device_type": 1 00:26:16.335 }, 00:26:16.335 { 00:26:16.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:16.335 "dma_device_type": 2 00:26:16.335 } 00:26:16.335 ], 00:26:16.335 "driver_specific": {} 00:26:16.335 } 00:26:16.335 ] 00:26:16.335 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.335 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:26:16.335 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:16.335 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:16.335 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:16.335 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:16.335 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:16.335 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:16.335 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:16.335 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:26:16.335 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:16.335 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:16.335 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:16.335 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:16.335 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:16.335 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:16.335 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.335 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.335 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.335 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:16.335 "name": "Existed_Raid", 00:26:16.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.335 "strip_size_kb": 64, 00:26:16.335 "state": "configuring", 00:26:16.335 "raid_level": "raid5f", 00:26:16.335 "superblock": false, 00:26:16.335 "num_base_bdevs": 4, 00:26:16.335 "num_base_bdevs_discovered": 3, 00:26:16.335 "num_base_bdevs_operational": 4, 00:26:16.335 "base_bdevs_list": [ 00:26:16.335 { 00:26:16.335 "name": "BaseBdev1", 00:26:16.335 "uuid": "5b657219-3fa4-4212-af6c-61e4c9866289", 00:26:16.335 "is_configured": true, 00:26:16.335 "data_offset": 0, 00:26:16.335 "data_size": 65536 00:26:16.335 }, 00:26:16.335 { 00:26:16.335 "name": "BaseBdev2", 00:26:16.335 "uuid": "c71ada4f-099f-4bcb-8437-e76ce71bfeb9", 00:26:16.335 "is_configured": true, 00:26:16.335 "data_offset": 0, 00:26:16.335 "data_size": 65536 00:26:16.335 }, 00:26:16.335 { 
00:26:16.335 "name": "BaseBdev3", 00:26:16.335 "uuid": "420db5dc-50b8-4fd0-bccd-a38b3f6a4794", 00:26:16.335 "is_configured": true, 00:26:16.335 "data_offset": 0, 00:26:16.335 "data_size": 65536 00:26:16.335 }, 00:26:16.335 { 00:26:16.335 "name": "BaseBdev4", 00:26:16.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.335 "is_configured": false, 00:26:16.335 "data_offset": 0, 00:26:16.335 "data_size": 0 00:26:16.335 } 00:26:16.335 ] 00:26:16.335 }' 00:26:16.335 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:16.335 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.595 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:26:16.595 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.595 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.595 [2024-12-09 23:07:51.939842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:16.595 [2024-12-09 23:07:51.940049] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:16.595 [2024-12-09 23:07:51.940063] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:26:16.595 [2024-12-09 23:07:51.940303] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:26:16.595 [2024-12-09 23:07:51.944328] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:16.595 [2024-12-09 23:07:51.944348] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:26:16.595 [2024-12-09 23:07:51.944578] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:16.595 BaseBdev4 00:26:16.595 23:07:51 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.595 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:26:16.595 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:26:16.595 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:16.595 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:26:16.595 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:16.595 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:16.595 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:16.595 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.595 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.856 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.856 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:16.856 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.856 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.856 [ 00:26:16.856 { 00:26:16.856 "name": "BaseBdev4", 00:26:16.856 "aliases": [ 00:26:16.856 "2bede6ab-eda9-4ba7-8ef0-5b5c614ba68a" 00:26:16.856 ], 00:26:16.856 "product_name": "Malloc disk", 00:26:16.856 "block_size": 512, 00:26:16.856 "num_blocks": 65536, 00:26:16.856 "uuid": "2bede6ab-eda9-4ba7-8ef0-5b5c614ba68a", 00:26:16.856 "assigned_rate_limits": { 00:26:16.856 "rw_ios_per_sec": 0, 00:26:16.856 
"rw_mbytes_per_sec": 0, 00:26:16.856 "r_mbytes_per_sec": 0, 00:26:16.856 "w_mbytes_per_sec": 0 00:26:16.856 }, 00:26:16.856 "claimed": true, 00:26:16.856 "claim_type": "exclusive_write", 00:26:16.856 "zoned": false, 00:26:16.856 "supported_io_types": { 00:26:16.856 "read": true, 00:26:16.856 "write": true, 00:26:16.856 "unmap": true, 00:26:16.856 "flush": true, 00:26:16.856 "reset": true, 00:26:16.856 "nvme_admin": false, 00:26:16.856 "nvme_io": false, 00:26:16.856 "nvme_io_md": false, 00:26:16.856 "write_zeroes": true, 00:26:16.856 "zcopy": true, 00:26:16.856 "get_zone_info": false, 00:26:16.856 "zone_management": false, 00:26:16.856 "zone_append": false, 00:26:16.856 "compare": false, 00:26:16.856 "compare_and_write": false, 00:26:16.856 "abort": true, 00:26:16.856 "seek_hole": false, 00:26:16.856 "seek_data": false, 00:26:16.856 "copy": true, 00:26:16.856 "nvme_iov_md": false 00:26:16.856 }, 00:26:16.856 "memory_domains": [ 00:26:16.856 { 00:26:16.856 "dma_device_id": "system", 00:26:16.856 "dma_device_type": 1 00:26:16.856 }, 00:26:16.856 { 00:26:16.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:16.856 "dma_device_type": 2 00:26:16.856 } 00:26:16.856 ], 00:26:16.856 "driver_specific": {} 00:26:16.856 } 00:26:16.856 ] 00:26:16.856 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.856 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:26:16.856 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:16.856 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:16.856 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:26:16.856 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:16.856 23:07:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:16.856 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:16.856 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:16.856 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:16.856 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:16.856 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:16.856 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:16.856 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:16.856 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:16.856 23:07:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:16.856 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.857 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.857 23:07:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.857 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:16.857 "name": "Existed_Raid", 00:26:16.857 "uuid": "6ab39ded-708d-4941-912f-ae92ddf0c0d3", 00:26:16.857 "strip_size_kb": 64, 00:26:16.857 "state": "online", 00:26:16.857 "raid_level": "raid5f", 00:26:16.857 "superblock": false, 00:26:16.857 "num_base_bdevs": 4, 00:26:16.857 "num_base_bdevs_discovered": 4, 00:26:16.857 "num_base_bdevs_operational": 4, 00:26:16.857 "base_bdevs_list": [ 00:26:16.857 { 00:26:16.857 "name": 
"BaseBdev1", 00:26:16.857 "uuid": "5b657219-3fa4-4212-af6c-61e4c9866289", 00:26:16.857 "is_configured": true, 00:26:16.857 "data_offset": 0, 00:26:16.857 "data_size": 65536 00:26:16.857 }, 00:26:16.857 { 00:26:16.857 "name": "BaseBdev2", 00:26:16.857 "uuid": "c71ada4f-099f-4bcb-8437-e76ce71bfeb9", 00:26:16.857 "is_configured": true, 00:26:16.857 "data_offset": 0, 00:26:16.857 "data_size": 65536 00:26:16.857 }, 00:26:16.857 { 00:26:16.857 "name": "BaseBdev3", 00:26:16.857 "uuid": "420db5dc-50b8-4fd0-bccd-a38b3f6a4794", 00:26:16.857 "is_configured": true, 00:26:16.857 "data_offset": 0, 00:26:16.857 "data_size": 65536 00:26:16.857 }, 00:26:16.857 { 00:26:16.857 "name": "BaseBdev4", 00:26:16.857 "uuid": "2bede6ab-eda9-4ba7-8ef0-5b5c614ba68a", 00:26:16.857 "is_configured": true, 00:26:16.857 "data_offset": 0, 00:26:16.857 "data_size": 65536 00:26:16.857 } 00:26:16.857 ] 00:26:16.857 }' 00:26:16.857 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:16.857 23:07:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.118 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:26:17.118 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:17.118 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:17.118 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:17.118 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:17.118 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:17.118 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:17.118 23:07:52 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:17.118 23:07:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.118 23:07:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.118 [2024-12-09 23:07:52.305080] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:17.118 23:07:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.118 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:17.118 "name": "Existed_Raid", 00:26:17.118 "aliases": [ 00:26:17.118 "6ab39ded-708d-4941-912f-ae92ddf0c0d3" 00:26:17.118 ], 00:26:17.118 "product_name": "Raid Volume", 00:26:17.118 "block_size": 512, 00:26:17.118 "num_blocks": 196608, 00:26:17.118 "uuid": "6ab39ded-708d-4941-912f-ae92ddf0c0d3", 00:26:17.118 "assigned_rate_limits": { 00:26:17.118 "rw_ios_per_sec": 0, 00:26:17.118 "rw_mbytes_per_sec": 0, 00:26:17.118 "r_mbytes_per_sec": 0, 00:26:17.118 "w_mbytes_per_sec": 0 00:26:17.118 }, 00:26:17.118 "claimed": false, 00:26:17.118 "zoned": false, 00:26:17.118 "supported_io_types": { 00:26:17.118 "read": true, 00:26:17.118 "write": true, 00:26:17.118 "unmap": false, 00:26:17.118 "flush": false, 00:26:17.118 "reset": true, 00:26:17.118 "nvme_admin": false, 00:26:17.118 "nvme_io": false, 00:26:17.118 "nvme_io_md": false, 00:26:17.118 "write_zeroes": true, 00:26:17.118 "zcopy": false, 00:26:17.118 "get_zone_info": false, 00:26:17.118 "zone_management": false, 00:26:17.118 "zone_append": false, 00:26:17.118 "compare": false, 00:26:17.118 "compare_and_write": false, 00:26:17.118 "abort": false, 00:26:17.118 "seek_hole": false, 00:26:17.118 "seek_data": false, 00:26:17.118 "copy": false, 00:26:17.118 "nvme_iov_md": false 00:26:17.119 }, 00:26:17.119 "driver_specific": { 00:26:17.119 "raid": { 00:26:17.119 "uuid": "6ab39ded-708d-4941-912f-ae92ddf0c0d3", 00:26:17.119 "strip_size_kb": 64, 
00:26:17.119 "state": "online", 00:26:17.119 "raid_level": "raid5f", 00:26:17.119 "superblock": false, 00:26:17.119 "num_base_bdevs": 4, 00:26:17.119 "num_base_bdevs_discovered": 4, 00:26:17.119 "num_base_bdevs_operational": 4, 00:26:17.119 "base_bdevs_list": [ 00:26:17.119 { 00:26:17.119 "name": "BaseBdev1", 00:26:17.119 "uuid": "5b657219-3fa4-4212-af6c-61e4c9866289", 00:26:17.119 "is_configured": true, 00:26:17.119 "data_offset": 0, 00:26:17.119 "data_size": 65536 00:26:17.119 }, 00:26:17.119 { 00:26:17.119 "name": "BaseBdev2", 00:26:17.119 "uuid": "c71ada4f-099f-4bcb-8437-e76ce71bfeb9", 00:26:17.119 "is_configured": true, 00:26:17.119 "data_offset": 0, 00:26:17.119 "data_size": 65536 00:26:17.119 }, 00:26:17.119 { 00:26:17.119 "name": "BaseBdev3", 00:26:17.119 "uuid": "420db5dc-50b8-4fd0-bccd-a38b3f6a4794", 00:26:17.119 "is_configured": true, 00:26:17.119 "data_offset": 0, 00:26:17.119 "data_size": 65536 00:26:17.119 }, 00:26:17.119 { 00:26:17.119 "name": "BaseBdev4", 00:26:17.119 "uuid": "2bede6ab-eda9-4ba7-8ef0-5b5c614ba68a", 00:26:17.119 "is_configured": true, 00:26:17.119 "data_offset": 0, 00:26:17.119 "data_size": 65536 00:26:17.119 } 00:26:17.119 ] 00:26:17.119 } 00:26:17.119 } 00:26:17.119 }' 00:26:17.119 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:17.119 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:26:17.119 BaseBdev2 00:26:17.119 BaseBdev3 00:26:17.119 BaseBdev4' 00:26:17.119 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:17.119 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:17.119 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:17.119 23:07:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:17.119 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:26:17.119 23:07:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.119 23:07:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.119 23:07:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.119 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:17.119 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:17.119 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:17.119 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:17.119 23:07:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.119 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:17.119 23:07:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.119 23:07:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.119 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:17.119 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:17.119 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:17.119 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:17.119 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:17.119 23:07:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.119 23:07:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.119 23:07:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.380 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:17.380 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:17.380 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:17.380 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:26:17.380 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:17.380 23:07:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.380 23:07:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.380 23:07:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.380 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:17.380 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:17.380 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:17.380 23:07:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.380 23:07:52 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:26:17.380 [2024-12-09 23:07:52.520994] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:17.380 23:07:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.380 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:26:17.380 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:26:17.380 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:17.380 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:26:17.380 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:26:17.380 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:26:17.380 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:17.380 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:17.380 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:17.380 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:17.380 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:17.380 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:17.380 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:17.380 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:17.380 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:17.380 23:07:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:17.380 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:17.380 23:07:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.380 23:07:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.380 23:07:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.380 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:17.380 "name": "Existed_Raid", 00:26:17.380 "uuid": "6ab39ded-708d-4941-912f-ae92ddf0c0d3", 00:26:17.380 "strip_size_kb": 64, 00:26:17.380 "state": "online", 00:26:17.380 "raid_level": "raid5f", 00:26:17.380 "superblock": false, 00:26:17.380 "num_base_bdevs": 4, 00:26:17.380 "num_base_bdevs_discovered": 3, 00:26:17.380 "num_base_bdevs_operational": 3, 00:26:17.380 "base_bdevs_list": [ 00:26:17.380 { 00:26:17.380 "name": null, 00:26:17.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:17.380 "is_configured": false, 00:26:17.380 "data_offset": 0, 00:26:17.380 "data_size": 65536 00:26:17.380 }, 00:26:17.380 { 00:26:17.380 "name": "BaseBdev2", 00:26:17.380 "uuid": "c71ada4f-099f-4bcb-8437-e76ce71bfeb9", 00:26:17.380 "is_configured": true, 00:26:17.380 "data_offset": 0, 00:26:17.380 "data_size": 65536 00:26:17.380 }, 00:26:17.380 { 00:26:17.380 "name": "BaseBdev3", 00:26:17.380 "uuid": "420db5dc-50b8-4fd0-bccd-a38b3f6a4794", 00:26:17.380 "is_configured": true, 00:26:17.380 "data_offset": 0, 00:26:17.380 "data_size": 65536 00:26:17.380 }, 00:26:17.380 { 00:26:17.380 "name": "BaseBdev4", 00:26:17.380 "uuid": "2bede6ab-eda9-4ba7-8ef0-5b5c614ba68a", 00:26:17.380 "is_configured": true, 00:26:17.380 "data_offset": 0, 00:26:17.380 "data_size": 65536 00:26:17.380 } 00:26:17.380 ] 00:26:17.380 }' 00:26:17.380 
23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:17.380 23:07:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.649 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:26:17.649 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:17.649 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:17.649 23:07:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.649 23:07:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.649 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:17.649 23:07:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.649 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:17.649 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:17.649 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:26:17.649 23:07:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.649 23:07:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.649 [2024-12-09 23:07:52.940690] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:17.649 [2024-12-09 23:07:52.940773] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:17.649 [2024-12-09 23:07:52.988599] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:17.649 23:07:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:26:17.649 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:17.649 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:17.649 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:17.649 23:07:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:17.649 23:07:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.649 23:07:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.913 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.913 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:17.913 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:17.913 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:26:17.913 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.913 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.913 [2024-12-09 23:07:53.032665] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:17.913 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.913 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:17.913 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:17.913 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:17.913 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:26:17.913 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.913 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.913 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.913 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:17.913 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:17.913 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:26:17.913 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.913 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.913 [2024-12-09 23:07:53.133031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:26:17.914 [2024-12-09 23:07:53.133077] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:26:17.914 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.914 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:17.914 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:17.914 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:17.914 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:26:17.914 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.914 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.914 23:07:53 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.914 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:26:17.914 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:26:17.914 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:26:17.914 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:26:17.914 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:17.914 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:17.914 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.914 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.914 BaseBdev2 00:26:17.914 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.914 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:26:17.914 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:26:17.914 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:17.914 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:26:17.914 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:17.914 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:17.914 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:17.914 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:17.914 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.914 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.914 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:17.914 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.914 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.175 [ 00:26:18.175 { 00:26:18.175 "name": "BaseBdev2", 00:26:18.175 "aliases": [ 00:26:18.175 "d5f1a3c6-f915-4a70-9a1b-c51d6764c96c" 00:26:18.175 ], 00:26:18.175 "product_name": "Malloc disk", 00:26:18.175 "block_size": 512, 00:26:18.175 "num_blocks": 65536, 00:26:18.175 "uuid": "d5f1a3c6-f915-4a70-9a1b-c51d6764c96c", 00:26:18.175 "assigned_rate_limits": { 00:26:18.175 "rw_ios_per_sec": 0, 00:26:18.175 "rw_mbytes_per_sec": 0, 00:26:18.175 "r_mbytes_per_sec": 0, 00:26:18.175 "w_mbytes_per_sec": 0 00:26:18.175 }, 00:26:18.175 "claimed": false, 00:26:18.175 "zoned": false, 00:26:18.175 "supported_io_types": { 00:26:18.175 "read": true, 00:26:18.175 "write": true, 00:26:18.175 "unmap": true, 00:26:18.175 "flush": true, 00:26:18.175 "reset": true, 00:26:18.175 "nvme_admin": false, 00:26:18.175 "nvme_io": false, 00:26:18.175 "nvme_io_md": false, 00:26:18.175 "write_zeroes": true, 00:26:18.175 "zcopy": true, 00:26:18.175 "get_zone_info": false, 00:26:18.175 "zone_management": false, 00:26:18.175 "zone_append": false, 00:26:18.175 "compare": false, 00:26:18.175 "compare_and_write": false, 00:26:18.175 "abort": true, 00:26:18.175 "seek_hole": false, 00:26:18.175 "seek_data": false, 00:26:18.175 "copy": true, 00:26:18.175 "nvme_iov_md": false 00:26:18.175 }, 00:26:18.175 "memory_domains": [ 00:26:18.175 { 00:26:18.175 "dma_device_id": "system", 00:26:18.175 "dma_device_type": 1 00:26:18.175 }, 
00:26:18.175 { 00:26:18.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:18.175 "dma_device_type": 2 00:26:18.175 } 00:26:18.175 ], 00:26:18.175 "driver_specific": {} 00:26:18.175 } 00:26:18.175 ] 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.175 BaseBdev3 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.175 [ 00:26:18.175 { 00:26:18.175 "name": "BaseBdev3", 00:26:18.175 "aliases": [ 00:26:18.175 "646cbfed-054d-4745-9528-2a85bbc5b304" 00:26:18.175 ], 00:26:18.175 "product_name": "Malloc disk", 00:26:18.175 "block_size": 512, 00:26:18.175 "num_blocks": 65536, 00:26:18.175 "uuid": "646cbfed-054d-4745-9528-2a85bbc5b304", 00:26:18.175 "assigned_rate_limits": { 00:26:18.175 "rw_ios_per_sec": 0, 00:26:18.175 "rw_mbytes_per_sec": 0, 00:26:18.175 "r_mbytes_per_sec": 0, 00:26:18.175 "w_mbytes_per_sec": 0 00:26:18.175 }, 00:26:18.175 "claimed": false, 00:26:18.175 "zoned": false, 00:26:18.175 "supported_io_types": { 00:26:18.175 "read": true, 00:26:18.175 "write": true, 00:26:18.175 "unmap": true, 00:26:18.175 "flush": true, 00:26:18.175 "reset": true, 00:26:18.175 "nvme_admin": false, 00:26:18.175 "nvme_io": false, 00:26:18.175 "nvme_io_md": false, 00:26:18.175 "write_zeroes": true, 00:26:18.175 "zcopy": true, 00:26:18.175 "get_zone_info": false, 00:26:18.175 "zone_management": false, 00:26:18.175 "zone_append": false, 00:26:18.175 "compare": false, 00:26:18.175 "compare_and_write": false, 00:26:18.175 "abort": true, 00:26:18.175 "seek_hole": false, 00:26:18.175 "seek_data": false, 00:26:18.175 "copy": true, 00:26:18.175 "nvme_iov_md": false 00:26:18.175 }, 00:26:18.175 "memory_domains": [ 00:26:18.175 { 00:26:18.175 "dma_device_id": "system", 00:26:18.175 
"dma_device_type": 1 00:26:18.175 }, 00:26:18.175 { 00:26:18.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:18.175 "dma_device_type": 2 00:26:18.175 } 00:26:18.175 ], 00:26:18.175 "driver_specific": {} 00:26:18.175 } 00:26:18.175 ] 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.175 BaseBdev4 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:18.175 23:07:53 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.175 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.175 [ 00:26:18.175 { 00:26:18.175 "name": "BaseBdev4", 00:26:18.175 "aliases": [ 00:26:18.175 "5998a427-0a10-4f62-bdb6-7abcb35e4843" 00:26:18.175 ], 00:26:18.175 "product_name": "Malloc disk", 00:26:18.175 "block_size": 512, 00:26:18.175 "num_blocks": 65536, 00:26:18.175 "uuid": "5998a427-0a10-4f62-bdb6-7abcb35e4843", 00:26:18.176 "assigned_rate_limits": { 00:26:18.176 "rw_ios_per_sec": 0, 00:26:18.176 "rw_mbytes_per_sec": 0, 00:26:18.176 "r_mbytes_per_sec": 0, 00:26:18.176 "w_mbytes_per_sec": 0 00:26:18.176 }, 00:26:18.176 "claimed": false, 00:26:18.176 "zoned": false, 00:26:18.176 "supported_io_types": { 00:26:18.176 "read": true, 00:26:18.176 "write": true, 00:26:18.176 "unmap": true, 00:26:18.176 "flush": true, 00:26:18.176 "reset": true, 00:26:18.176 "nvme_admin": false, 00:26:18.176 "nvme_io": false, 00:26:18.176 "nvme_io_md": false, 00:26:18.176 "write_zeroes": true, 00:26:18.176 "zcopy": true, 00:26:18.176 "get_zone_info": false, 00:26:18.176 "zone_management": false, 00:26:18.176 "zone_append": false, 00:26:18.176 "compare": false, 00:26:18.176 "compare_and_write": false, 00:26:18.176 "abort": true, 00:26:18.176 "seek_hole": false, 00:26:18.176 "seek_data": false, 00:26:18.176 "copy": true, 00:26:18.176 "nvme_iov_md": false 00:26:18.176 }, 00:26:18.176 "memory_domains": [ 00:26:18.176 { 00:26:18.176 
"dma_device_id": "system", 00:26:18.176 "dma_device_type": 1 00:26:18.176 }, 00:26:18.176 { 00:26:18.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:18.176 "dma_device_type": 2 00:26:18.176 } 00:26:18.176 ], 00:26:18.176 "driver_specific": {} 00:26:18.176 } 00:26:18.176 ] 00:26:18.176 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.176 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:26:18.176 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:18.176 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:18.176 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:26:18.176 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.176 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.176 [2024-12-09 23:07:53.387850] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:18.176 [2024-12-09 23:07:53.387898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:18.176 [2024-12-09 23:07:53.387919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:18.176 [2024-12-09 23:07:53.389588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:18.176 [2024-12-09 23:07:53.389632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:18.176 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.176 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:26:18.176 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:18.176 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:18.176 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:18.176 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:18.176 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:18.176 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:18.176 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:18.176 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:18.176 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:18.176 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:18.176 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:18.176 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.176 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.176 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.176 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:18.176 "name": "Existed_Raid", 00:26:18.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:18.176 "strip_size_kb": 64, 00:26:18.176 "state": "configuring", 00:26:18.176 "raid_level": "raid5f", 00:26:18.176 "superblock": false, 00:26:18.176 
"num_base_bdevs": 4, 00:26:18.176 "num_base_bdevs_discovered": 3, 00:26:18.176 "num_base_bdevs_operational": 4, 00:26:18.176 "base_bdevs_list": [ 00:26:18.176 { 00:26:18.176 "name": "BaseBdev1", 00:26:18.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:18.176 "is_configured": false, 00:26:18.176 "data_offset": 0, 00:26:18.176 "data_size": 0 00:26:18.176 }, 00:26:18.176 { 00:26:18.176 "name": "BaseBdev2", 00:26:18.176 "uuid": "d5f1a3c6-f915-4a70-9a1b-c51d6764c96c", 00:26:18.176 "is_configured": true, 00:26:18.176 "data_offset": 0, 00:26:18.176 "data_size": 65536 00:26:18.176 }, 00:26:18.176 { 00:26:18.176 "name": "BaseBdev3", 00:26:18.176 "uuid": "646cbfed-054d-4745-9528-2a85bbc5b304", 00:26:18.176 "is_configured": true, 00:26:18.176 "data_offset": 0, 00:26:18.176 "data_size": 65536 00:26:18.176 }, 00:26:18.176 { 00:26:18.176 "name": "BaseBdev4", 00:26:18.176 "uuid": "5998a427-0a10-4f62-bdb6-7abcb35e4843", 00:26:18.176 "is_configured": true, 00:26:18.176 "data_offset": 0, 00:26:18.176 "data_size": 65536 00:26:18.176 } 00:26:18.176 ] 00:26:18.176 }' 00:26:18.176 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:18.176 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.438 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:26:18.438 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.438 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.438 [2024-12-09 23:07:53.707916] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:18.438 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.438 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:26:18.438 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:18.438 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:18.438 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:18.438 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:18.438 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:18.438 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:18.438 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:18.438 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:18.438 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:18.438 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:18.438 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.438 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.438 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:18.438 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.438 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:18.438 "name": "Existed_Raid", 00:26:18.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:18.438 "strip_size_kb": 64, 00:26:18.438 "state": "configuring", 00:26:18.438 "raid_level": "raid5f", 00:26:18.438 "superblock": false, 00:26:18.438 "num_base_bdevs": 4, 
00:26:18.438 "num_base_bdevs_discovered": 2, 00:26:18.438 "num_base_bdevs_operational": 4, 00:26:18.438 "base_bdevs_list": [ 00:26:18.438 { 00:26:18.438 "name": "BaseBdev1", 00:26:18.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:18.438 "is_configured": false, 00:26:18.438 "data_offset": 0, 00:26:18.438 "data_size": 0 00:26:18.438 }, 00:26:18.438 { 00:26:18.438 "name": null, 00:26:18.438 "uuid": "d5f1a3c6-f915-4a70-9a1b-c51d6764c96c", 00:26:18.438 "is_configured": false, 00:26:18.438 "data_offset": 0, 00:26:18.438 "data_size": 65536 00:26:18.438 }, 00:26:18.438 { 00:26:18.438 "name": "BaseBdev3", 00:26:18.438 "uuid": "646cbfed-054d-4745-9528-2a85bbc5b304", 00:26:18.438 "is_configured": true, 00:26:18.438 "data_offset": 0, 00:26:18.438 "data_size": 65536 00:26:18.438 }, 00:26:18.438 { 00:26:18.438 "name": "BaseBdev4", 00:26:18.438 "uuid": "5998a427-0a10-4f62-bdb6-7abcb35e4843", 00:26:18.438 "is_configured": true, 00:26:18.438 "data_offset": 0, 00:26:18.438 "data_size": 65536 00:26:18.438 } 00:26:18.438 ] 00:26:18.438 }' 00:26:18.438 23:07:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:18.438 23:07:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.700 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:18.700 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:18.700 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.700 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.700 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.962 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:26:18.962 23:07:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:18.962 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.962 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.962 [2024-12-09 23:07:54.086267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:18.962 BaseBdev1 00:26:18.962 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.962 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:26:18.962 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:26:18.962 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:18.962 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:26:18.962 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:18.962 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:18.962 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:18.962 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.962 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.962 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.962 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:18.962 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.962 23:07:54 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.962 [ 00:26:18.962 { 00:26:18.962 "name": "BaseBdev1", 00:26:18.962 "aliases": [ 00:26:18.962 "1114d6a7-20e8-47b9-9526-5bde329e8323" 00:26:18.962 ], 00:26:18.962 "product_name": "Malloc disk", 00:26:18.962 "block_size": 512, 00:26:18.962 "num_blocks": 65536, 00:26:18.962 "uuid": "1114d6a7-20e8-47b9-9526-5bde329e8323", 00:26:18.962 "assigned_rate_limits": { 00:26:18.962 "rw_ios_per_sec": 0, 00:26:18.962 "rw_mbytes_per_sec": 0, 00:26:18.962 "r_mbytes_per_sec": 0, 00:26:18.962 "w_mbytes_per_sec": 0 00:26:18.962 }, 00:26:18.962 "claimed": true, 00:26:18.962 "claim_type": "exclusive_write", 00:26:18.962 "zoned": false, 00:26:18.962 "supported_io_types": { 00:26:18.962 "read": true, 00:26:18.962 "write": true, 00:26:18.962 "unmap": true, 00:26:18.962 "flush": true, 00:26:18.962 "reset": true, 00:26:18.962 "nvme_admin": false, 00:26:18.962 "nvme_io": false, 00:26:18.962 "nvme_io_md": false, 00:26:18.962 "write_zeroes": true, 00:26:18.962 "zcopy": true, 00:26:18.962 "get_zone_info": false, 00:26:18.962 "zone_management": false, 00:26:18.962 "zone_append": false, 00:26:18.962 "compare": false, 00:26:18.962 "compare_and_write": false, 00:26:18.962 "abort": true, 00:26:18.962 "seek_hole": false, 00:26:18.962 "seek_data": false, 00:26:18.962 "copy": true, 00:26:18.962 "nvme_iov_md": false 00:26:18.962 }, 00:26:18.962 "memory_domains": [ 00:26:18.962 { 00:26:18.962 "dma_device_id": "system", 00:26:18.962 "dma_device_type": 1 00:26:18.962 }, 00:26:18.962 { 00:26:18.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:18.962 "dma_device_type": 2 00:26:18.962 } 00:26:18.962 ], 00:26:18.962 "driver_specific": {} 00:26:18.962 } 00:26:18.962 ] 00:26:18.962 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.962 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:26:18.962 23:07:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:18.962 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:18.962 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:18.962 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:18.962 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:18.962 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:18.962 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:18.962 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:18.962 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:18.962 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:18.962 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:18.962 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:18.962 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.962 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.962 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.962 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:18.962 "name": "Existed_Raid", 00:26:18.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:18.962 "strip_size_kb": 64, 00:26:18.962 "state": 
"configuring", 00:26:18.962 "raid_level": "raid5f", 00:26:18.962 "superblock": false, 00:26:18.962 "num_base_bdevs": 4, 00:26:18.962 "num_base_bdevs_discovered": 3, 00:26:18.962 "num_base_bdevs_operational": 4, 00:26:18.962 "base_bdevs_list": [ 00:26:18.962 { 00:26:18.962 "name": "BaseBdev1", 00:26:18.962 "uuid": "1114d6a7-20e8-47b9-9526-5bde329e8323", 00:26:18.962 "is_configured": true, 00:26:18.962 "data_offset": 0, 00:26:18.962 "data_size": 65536 00:26:18.962 }, 00:26:18.962 { 00:26:18.962 "name": null, 00:26:18.962 "uuid": "d5f1a3c6-f915-4a70-9a1b-c51d6764c96c", 00:26:18.962 "is_configured": false, 00:26:18.962 "data_offset": 0, 00:26:18.962 "data_size": 65536 00:26:18.962 }, 00:26:18.962 { 00:26:18.962 "name": "BaseBdev3", 00:26:18.962 "uuid": "646cbfed-054d-4745-9528-2a85bbc5b304", 00:26:18.962 "is_configured": true, 00:26:18.962 "data_offset": 0, 00:26:18.962 "data_size": 65536 00:26:18.962 }, 00:26:18.962 { 00:26:18.962 "name": "BaseBdev4", 00:26:18.962 "uuid": "5998a427-0a10-4f62-bdb6-7abcb35e4843", 00:26:18.962 "is_configured": true, 00:26:18.962 "data_offset": 0, 00:26:18.962 "data_size": 65536 00:26:18.962 } 00:26:18.962 ] 00:26:18.962 }' 00:26:18.962 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:18.962 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.224 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:19.224 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.224 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:19.224 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.224 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.224 23:07:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:26:19.224 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:26:19.224 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.224 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.224 [2024-12-09 23:07:54.442407] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:19.224 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.224 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:19.224 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:19.224 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:19.224 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:19.224 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:19.224 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:19.224 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:19.224 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:19.224 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:19.224 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:19.224 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:19.224 23:07:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:19.224 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.224 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.224 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.224 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:19.224 "name": "Existed_Raid", 00:26:19.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:19.224 "strip_size_kb": 64, 00:26:19.224 "state": "configuring", 00:26:19.224 "raid_level": "raid5f", 00:26:19.224 "superblock": false, 00:26:19.224 "num_base_bdevs": 4, 00:26:19.224 "num_base_bdevs_discovered": 2, 00:26:19.224 "num_base_bdevs_operational": 4, 00:26:19.224 "base_bdevs_list": [ 00:26:19.224 { 00:26:19.224 "name": "BaseBdev1", 00:26:19.224 "uuid": "1114d6a7-20e8-47b9-9526-5bde329e8323", 00:26:19.224 "is_configured": true, 00:26:19.224 "data_offset": 0, 00:26:19.224 "data_size": 65536 00:26:19.224 }, 00:26:19.224 { 00:26:19.224 "name": null, 00:26:19.224 "uuid": "d5f1a3c6-f915-4a70-9a1b-c51d6764c96c", 00:26:19.224 "is_configured": false, 00:26:19.224 "data_offset": 0, 00:26:19.224 "data_size": 65536 00:26:19.224 }, 00:26:19.224 { 00:26:19.224 "name": null, 00:26:19.224 "uuid": "646cbfed-054d-4745-9528-2a85bbc5b304", 00:26:19.224 "is_configured": false, 00:26:19.224 "data_offset": 0, 00:26:19.224 "data_size": 65536 00:26:19.224 }, 00:26:19.224 { 00:26:19.224 "name": "BaseBdev4", 00:26:19.224 "uuid": "5998a427-0a10-4f62-bdb6-7abcb35e4843", 00:26:19.224 "is_configured": true, 00:26:19.224 "data_offset": 0, 00:26:19.224 "data_size": 65536 00:26:19.224 } 00:26:19.224 ] 00:26:19.224 }' 00:26:19.224 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:19.224 23:07:54 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.485 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:19.485 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:19.485 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.485 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.485 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.485 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:26:19.485 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:19.485 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.485 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.485 [2024-12-09 23:07:54.810475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:19.485 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.485 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:19.485 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:19.485 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:19.485 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:19.485 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:19.485 
23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:19.485 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:19.485 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:19.485 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:19.485 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:19.485 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:19.485 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:19.486 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.486 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.486 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.486 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:19.486 "name": "Existed_Raid", 00:26:19.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:19.486 "strip_size_kb": 64, 00:26:19.486 "state": "configuring", 00:26:19.486 "raid_level": "raid5f", 00:26:19.486 "superblock": false, 00:26:19.486 "num_base_bdevs": 4, 00:26:19.486 "num_base_bdevs_discovered": 3, 00:26:19.486 "num_base_bdevs_operational": 4, 00:26:19.486 "base_bdevs_list": [ 00:26:19.486 { 00:26:19.486 "name": "BaseBdev1", 00:26:19.486 "uuid": "1114d6a7-20e8-47b9-9526-5bde329e8323", 00:26:19.486 "is_configured": true, 00:26:19.486 "data_offset": 0, 00:26:19.486 "data_size": 65536 00:26:19.486 }, 00:26:19.486 { 00:26:19.486 "name": null, 00:26:19.486 "uuid": "d5f1a3c6-f915-4a70-9a1b-c51d6764c96c", 00:26:19.486 "is_configured": 
false, 00:26:19.486 "data_offset": 0, 00:26:19.486 "data_size": 65536 00:26:19.486 }, 00:26:19.486 { 00:26:19.486 "name": "BaseBdev3", 00:26:19.486 "uuid": "646cbfed-054d-4745-9528-2a85bbc5b304", 00:26:19.486 "is_configured": true, 00:26:19.486 "data_offset": 0, 00:26:19.486 "data_size": 65536 00:26:19.486 }, 00:26:19.486 { 00:26:19.486 "name": "BaseBdev4", 00:26:19.486 "uuid": "5998a427-0a10-4f62-bdb6-7abcb35e4843", 00:26:19.486 "is_configured": true, 00:26:19.486 "data_offset": 0, 00:26:19.486 "data_size": 65536 00:26:19.486 } 00:26:19.486 ] 00:26:19.486 }' 00:26:19.486 23:07:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:19.486 23:07:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.060 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:20.060 23:07:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.060 23:07:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.060 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:20.060 23:07:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.060 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:26:20.060 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:20.060 23:07:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.060 23:07:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.060 [2024-12-09 23:07:55.158559] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:20.060 23:07:55 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.060 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:20.060 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:20.060 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:20.060 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:20.060 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:20.060 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:20.060 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:20.060 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:20.060 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:20.060 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:20.060 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:20.060 23:07:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.060 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:20.060 23:07:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.060 23:07:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.060 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:20.060 "name": "Existed_Raid", 00:26:20.060 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:26:20.060 "strip_size_kb": 64, 00:26:20.060 "state": "configuring", 00:26:20.060 "raid_level": "raid5f", 00:26:20.060 "superblock": false, 00:26:20.060 "num_base_bdevs": 4, 00:26:20.060 "num_base_bdevs_discovered": 2, 00:26:20.060 "num_base_bdevs_operational": 4, 00:26:20.060 "base_bdevs_list": [ 00:26:20.060 { 00:26:20.060 "name": null, 00:26:20.060 "uuid": "1114d6a7-20e8-47b9-9526-5bde329e8323", 00:26:20.060 "is_configured": false, 00:26:20.060 "data_offset": 0, 00:26:20.060 "data_size": 65536 00:26:20.060 }, 00:26:20.060 { 00:26:20.060 "name": null, 00:26:20.060 "uuid": "d5f1a3c6-f915-4a70-9a1b-c51d6764c96c", 00:26:20.060 "is_configured": false, 00:26:20.060 "data_offset": 0, 00:26:20.060 "data_size": 65536 00:26:20.060 }, 00:26:20.060 { 00:26:20.060 "name": "BaseBdev3", 00:26:20.060 "uuid": "646cbfed-054d-4745-9528-2a85bbc5b304", 00:26:20.060 "is_configured": true, 00:26:20.060 "data_offset": 0, 00:26:20.060 "data_size": 65536 00:26:20.060 }, 00:26:20.060 { 00:26:20.060 "name": "BaseBdev4", 00:26:20.060 "uuid": "5998a427-0a10-4f62-bdb6-7abcb35e4843", 00:26:20.060 "is_configured": true, 00:26:20.060 "data_offset": 0, 00:26:20.060 "data_size": 65536 00:26:20.060 } 00:26:20.060 ] 00:26:20.060 }' 00:26:20.060 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:20.060 23:07:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.323 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:20.323 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:20.323 23:07:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.323 23:07:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.323 23:07:55 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.323 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:26:20.323 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:20.323 23:07:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.323 23:07:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.323 [2024-12-09 23:07:55.561254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:20.323 23:07:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.323 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:20.323 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:20.323 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:20.323 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:20.324 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:20.324 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:20.324 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:20.324 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:20.324 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:20.324 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:20.324 23:07:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:20.324 23:07:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.324 23:07:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.324 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:20.324 23:07:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.324 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:20.324 "name": "Existed_Raid", 00:26:20.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:20.324 "strip_size_kb": 64, 00:26:20.324 "state": "configuring", 00:26:20.324 "raid_level": "raid5f", 00:26:20.324 "superblock": false, 00:26:20.324 "num_base_bdevs": 4, 00:26:20.324 "num_base_bdevs_discovered": 3, 00:26:20.324 "num_base_bdevs_operational": 4, 00:26:20.324 "base_bdevs_list": [ 00:26:20.324 { 00:26:20.324 "name": null, 00:26:20.324 "uuid": "1114d6a7-20e8-47b9-9526-5bde329e8323", 00:26:20.324 "is_configured": false, 00:26:20.324 "data_offset": 0, 00:26:20.324 "data_size": 65536 00:26:20.324 }, 00:26:20.324 { 00:26:20.324 "name": "BaseBdev2", 00:26:20.324 "uuid": "d5f1a3c6-f915-4a70-9a1b-c51d6764c96c", 00:26:20.324 "is_configured": true, 00:26:20.324 "data_offset": 0, 00:26:20.324 "data_size": 65536 00:26:20.324 }, 00:26:20.324 { 00:26:20.324 "name": "BaseBdev3", 00:26:20.324 "uuid": "646cbfed-054d-4745-9528-2a85bbc5b304", 00:26:20.324 "is_configured": true, 00:26:20.324 "data_offset": 0, 00:26:20.324 "data_size": 65536 00:26:20.324 }, 00:26:20.324 { 00:26:20.324 "name": "BaseBdev4", 00:26:20.324 "uuid": "5998a427-0a10-4f62-bdb6-7abcb35e4843", 00:26:20.324 "is_configured": true, 00:26:20.324 "data_offset": 0, 00:26:20.324 "data_size": 65536 00:26:20.324 } 00:26:20.324 ] 00:26:20.324 }' 00:26:20.324 23:07:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:20.324 23:07:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.586 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:20.586 23:07:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.586 23:07:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.586 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:20.586 23:07:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.586 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:26:20.586 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:20.586 23:07:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.586 23:07:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.586 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:20.847 23:07:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.847 23:07:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1114d6a7-20e8-47b9-9526-5bde329e8323 00:26:20.847 23:07:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.847 23:07:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.847 [2024-12-09 23:07:56.004432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:20.847 [2024-12-09 
23:07:56.004471] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:26:20.847 [2024-12-09 23:07:56.004477] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:26:20.847 [2024-12-09 23:07:56.004694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:26:20.847 [2024-12-09 23:07:56.008395] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:26:20.847 [2024-12-09 23:07:56.008425] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:26:20.847 [2024-12-09 23:07:56.008632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:20.847 NewBaseBdev 00:26:20.847 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.847 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:26:20.847 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:26:20.847 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:20.847 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:26:20.847 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:20.847 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:20.847 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:20.847 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.847 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.847 23:07:56 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.847 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:20.847 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.847 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.847 [ 00:26:20.847 { 00:26:20.847 "name": "NewBaseBdev", 00:26:20.847 "aliases": [ 00:26:20.847 "1114d6a7-20e8-47b9-9526-5bde329e8323" 00:26:20.847 ], 00:26:20.847 "product_name": "Malloc disk", 00:26:20.847 "block_size": 512, 00:26:20.847 "num_blocks": 65536, 00:26:20.847 "uuid": "1114d6a7-20e8-47b9-9526-5bde329e8323", 00:26:20.847 "assigned_rate_limits": { 00:26:20.847 "rw_ios_per_sec": 0, 00:26:20.847 "rw_mbytes_per_sec": 0, 00:26:20.848 "r_mbytes_per_sec": 0, 00:26:20.848 "w_mbytes_per_sec": 0 00:26:20.848 }, 00:26:20.848 "claimed": true, 00:26:20.848 "claim_type": "exclusive_write", 00:26:20.848 "zoned": false, 00:26:20.848 "supported_io_types": { 00:26:20.848 "read": true, 00:26:20.848 "write": true, 00:26:20.848 "unmap": true, 00:26:20.848 "flush": true, 00:26:20.848 "reset": true, 00:26:20.848 "nvme_admin": false, 00:26:20.848 "nvme_io": false, 00:26:20.848 "nvme_io_md": false, 00:26:20.848 "write_zeroes": true, 00:26:20.848 "zcopy": true, 00:26:20.848 "get_zone_info": false, 00:26:20.848 "zone_management": false, 00:26:20.848 "zone_append": false, 00:26:20.848 "compare": false, 00:26:20.848 "compare_and_write": false, 00:26:20.848 "abort": true, 00:26:20.848 "seek_hole": false, 00:26:20.848 "seek_data": false, 00:26:20.848 "copy": true, 00:26:20.848 "nvme_iov_md": false 00:26:20.848 }, 00:26:20.848 "memory_domains": [ 00:26:20.848 { 00:26:20.848 "dma_device_id": "system", 00:26:20.848 "dma_device_type": 1 00:26:20.848 }, 00:26:20.848 { 00:26:20.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:20.848 "dma_device_type": 2 00:26:20.848 } 
00:26:20.848 ], 00:26:20.848 "driver_specific": {} 00:26:20.848 } 00:26:20.848 ] 00:26:20.848 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.848 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:26:20.848 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:26:20.848 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:20.848 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:20.848 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:20.848 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:20.848 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:20.848 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:20.848 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:20.848 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:20.848 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:20.848 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:20.848 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:20.848 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.848 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.848 23:07:56 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.848 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:20.848 "name": "Existed_Raid", 00:26:20.848 "uuid": "2020f2d6-6e46-4935-989e-70408c2530f4", 00:26:20.848 "strip_size_kb": 64, 00:26:20.848 "state": "online", 00:26:20.848 "raid_level": "raid5f", 00:26:20.848 "superblock": false, 00:26:20.848 "num_base_bdevs": 4, 00:26:20.848 "num_base_bdevs_discovered": 4, 00:26:20.848 "num_base_bdevs_operational": 4, 00:26:20.848 "base_bdevs_list": [ 00:26:20.848 { 00:26:20.848 "name": "NewBaseBdev", 00:26:20.848 "uuid": "1114d6a7-20e8-47b9-9526-5bde329e8323", 00:26:20.848 "is_configured": true, 00:26:20.848 "data_offset": 0, 00:26:20.848 "data_size": 65536 00:26:20.848 }, 00:26:20.848 { 00:26:20.848 "name": "BaseBdev2", 00:26:20.848 "uuid": "d5f1a3c6-f915-4a70-9a1b-c51d6764c96c", 00:26:20.848 "is_configured": true, 00:26:20.848 "data_offset": 0, 00:26:20.848 "data_size": 65536 00:26:20.848 }, 00:26:20.848 { 00:26:20.848 "name": "BaseBdev3", 00:26:20.848 "uuid": "646cbfed-054d-4745-9528-2a85bbc5b304", 00:26:20.848 "is_configured": true, 00:26:20.848 "data_offset": 0, 00:26:20.848 "data_size": 65536 00:26:20.848 }, 00:26:20.848 { 00:26:20.848 "name": "BaseBdev4", 00:26:20.848 "uuid": "5998a427-0a10-4f62-bdb6-7abcb35e4843", 00:26:20.848 "is_configured": true, 00:26:20.848 "data_offset": 0, 00:26:20.848 "data_size": 65536 00:26:20.848 } 00:26:20.848 ] 00:26:20.848 }' 00:26:20.848 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:20.848 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.110 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:26:21.110 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:21.110 23:07:56 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:21.110 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:21.110 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:21.110 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:21.110 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:21.110 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.110 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.110 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:21.110 [2024-12-09 23:07:56.345187] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:21.110 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.110 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:21.110 "name": "Existed_Raid", 00:26:21.110 "aliases": [ 00:26:21.110 "2020f2d6-6e46-4935-989e-70408c2530f4" 00:26:21.110 ], 00:26:21.110 "product_name": "Raid Volume", 00:26:21.110 "block_size": 512, 00:26:21.110 "num_blocks": 196608, 00:26:21.110 "uuid": "2020f2d6-6e46-4935-989e-70408c2530f4", 00:26:21.110 "assigned_rate_limits": { 00:26:21.110 "rw_ios_per_sec": 0, 00:26:21.110 "rw_mbytes_per_sec": 0, 00:26:21.110 "r_mbytes_per_sec": 0, 00:26:21.110 "w_mbytes_per_sec": 0 00:26:21.110 }, 00:26:21.110 "claimed": false, 00:26:21.110 "zoned": false, 00:26:21.110 "supported_io_types": { 00:26:21.110 "read": true, 00:26:21.110 "write": true, 00:26:21.110 "unmap": false, 00:26:21.110 "flush": false, 00:26:21.110 "reset": true, 00:26:21.110 "nvme_admin": false, 00:26:21.110 "nvme_io": false, 00:26:21.110 "nvme_io_md": 
false, 00:26:21.110 "write_zeroes": true, 00:26:21.110 "zcopy": false, 00:26:21.110 "get_zone_info": false, 00:26:21.110 "zone_management": false, 00:26:21.110 "zone_append": false, 00:26:21.110 "compare": false, 00:26:21.110 "compare_and_write": false, 00:26:21.110 "abort": false, 00:26:21.110 "seek_hole": false, 00:26:21.110 "seek_data": false, 00:26:21.110 "copy": false, 00:26:21.110 "nvme_iov_md": false 00:26:21.110 }, 00:26:21.110 "driver_specific": { 00:26:21.110 "raid": { 00:26:21.110 "uuid": "2020f2d6-6e46-4935-989e-70408c2530f4", 00:26:21.110 "strip_size_kb": 64, 00:26:21.110 "state": "online", 00:26:21.110 "raid_level": "raid5f", 00:26:21.110 "superblock": false, 00:26:21.110 "num_base_bdevs": 4, 00:26:21.110 "num_base_bdevs_discovered": 4, 00:26:21.110 "num_base_bdevs_operational": 4, 00:26:21.110 "base_bdevs_list": [ 00:26:21.110 { 00:26:21.110 "name": "NewBaseBdev", 00:26:21.110 "uuid": "1114d6a7-20e8-47b9-9526-5bde329e8323", 00:26:21.110 "is_configured": true, 00:26:21.110 "data_offset": 0, 00:26:21.110 "data_size": 65536 00:26:21.110 }, 00:26:21.110 { 00:26:21.110 "name": "BaseBdev2", 00:26:21.110 "uuid": "d5f1a3c6-f915-4a70-9a1b-c51d6764c96c", 00:26:21.110 "is_configured": true, 00:26:21.110 "data_offset": 0, 00:26:21.110 "data_size": 65536 00:26:21.110 }, 00:26:21.110 { 00:26:21.110 "name": "BaseBdev3", 00:26:21.110 "uuid": "646cbfed-054d-4745-9528-2a85bbc5b304", 00:26:21.110 "is_configured": true, 00:26:21.110 "data_offset": 0, 00:26:21.110 "data_size": 65536 00:26:21.110 }, 00:26:21.110 { 00:26:21.110 "name": "BaseBdev4", 00:26:21.110 "uuid": "5998a427-0a10-4f62-bdb6-7abcb35e4843", 00:26:21.110 "is_configured": true, 00:26:21.110 "data_offset": 0, 00:26:21.110 "data_size": 65536 00:26:21.110 } 00:26:21.110 ] 00:26:21.110 } 00:26:21.110 } 00:26:21.110 }' 00:26:21.110 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:21.110 23:07:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:26:21.110 BaseBdev2 00:26:21.110 BaseBdev3 00:26:21.110 BaseBdev4' 00:26:21.110 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:21.110 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:21.110 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:21.110 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:26:21.110 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.110 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:21.110 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.110 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.110 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:21.110 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:21.110 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:21.110 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:21.110 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:21.110 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.110 23:07:56 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:26:21.371 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.371 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:21.371 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:21.371 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:21.371 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:21.371 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.371 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.371 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:21.371 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.371 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:21.371 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:21.371 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:21.371 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:21.371 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:26:21.371 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.371 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.371 23:07:56 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.371 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:21.371 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:21.371 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:21.371 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.371 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.371 [2024-12-09 23:07:56.577018] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:21.371 [2024-12-09 23:07:56.577043] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:21.371 [2024-12-09 23:07:56.577123] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:21.371 [2024-12-09 23:07:56.577369] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:21.371 [2024-12-09 23:07:56.577382] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:26:21.371 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.371 23:07:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80461 00:26:21.371 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80461 ']' 00:26:21.371 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 80461 00:26:21.371 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:26:21.371 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:26:21.371 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80461 00:26:21.371 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:21.371 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:21.371 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80461' 00:26:21.371 killing process with pid 80461 00:26:21.371 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80461 00:26:21.371 [2024-12-09 23:07:56.609420] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:21.371 23:07:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80461 00:26:21.631 [2024-12-09 23:07:56.812269] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:22.203 23:07:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:26:22.203 00:26:22.203 real 0m8.226s 00:26:22.203 user 0m13.265s 00:26:22.203 sys 0m1.378s 00:26:22.203 23:07:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:22.203 23:07:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.203 ************************************ 00:26:22.203 END TEST raid5f_state_function_test 00:26:22.203 ************************************ 00:26:22.203 23:07:57 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:26:22.203 23:07:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:22.203 23:07:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:22.203 23:07:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:22.203 ************************************ 00:26:22.203 START TEST 
raid5f_state_function_test_sb 00:26:22.203 ************************************ 00:26:22.203 23:07:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:26:22.203 23:07:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:26:22.203 23:07:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:26:22.203 23:07:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:26:22.203 23:07:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:26:22.203 23:07:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:26:22.203 23:07:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:22.203 23:07:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:26:22.203 23:07:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:22.203 23:07:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:22.203 23:07:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:26:22.203 23:07:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:22.203 23:07:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:22.203 23:07:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:26:22.203 23:07:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:22.203 23:07:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:22.203 23:07:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:26:22.203 
23:07:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:22.203 23:07:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:22.204 23:07:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:22.204 23:07:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:26:22.204 23:07:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:26:22.204 23:07:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:26:22.204 23:07:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:26:22.204 23:07:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:26:22.204 23:07:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:26:22.204 23:07:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:26:22.204 23:07:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:26:22.204 Process raid pid: 81105 00:26:22.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:22.204 23:07:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:26:22.204 23:07:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:26:22.204 23:07:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81105 00:26:22.204 23:07:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81105' 00:26:22.204 23:07:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81105 00:26:22.204 23:07:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81105 ']' 00:26:22.204 23:07:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:22.204 23:07:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:22.204 23:07:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:22.204 23:07:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:22.204 23:07:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:22.204 23:07:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:26:22.204 [2024-12-09 23:07:57.519349] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:26:22.204 [2024-12-09 23:07:57.519479] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:22.466 [2024-12-09 23:07:57.675739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.466 [2024-12-09 23:07:57.764053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.727 [2024-12-09 23:07:57.878076] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:22.727 [2024-12-09 23:07:57.878120] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:23.298 23:07:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:23.298 23:07:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:26:23.298 23:07:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:26:23.298 23:07:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.298 23:07:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:23.298 [2024-12-09 23:07:58.372269] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:23.298 [2024-12-09 23:07:58.372315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:23.298 [2024-12-09 23:07:58.372323] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:23.298 [2024-12-09 23:07:58.372331] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:23.298 [2024-12-09 23:07:58.372337] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:26:23.298 [2024-12-09 23:07:58.372344] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:23.298 [2024-12-09 23:07:58.372349] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:23.298 [2024-12-09 23:07:58.372355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:23.298 23:07:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.299 23:07:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:23.299 23:07:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:23.299 23:07:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:23.299 23:07:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:23.299 23:07:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:23.299 23:07:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:23.299 23:07:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:23.299 23:07:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:23.299 23:07:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:23.299 23:07:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:23.299 23:07:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:23.299 23:07:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:23.299 23:07:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:23.299 23:07:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:23.299 23:07:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.299 23:07:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:23.299 "name": "Existed_Raid", 00:26:23.299 "uuid": "354042a0-c346-4ae5-a560-b257a8c57a13", 00:26:23.299 "strip_size_kb": 64, 00:26:23.299 "state": "configuring", 00:26:23.299 "raid_level": "raid5f", 00:26:23.299 "superblock": true, 00:26:23.299 "num_base_bdevs": 4, 00:26:23.299 "num_base_bdevs_discovered": 0, 00:26:23.299 "num_base_bdevs_operational": 4, 00:26:23.299 "base_bdevs_list": [ 00:26:23.299 { 00:26:23.299 "name": "BaseBdev1", 00:26:23.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:23.299 "is_configured": false, 00:26:23.299 "data_offset": 0, 00:26:23.299 "data_size": 0 00:26:23.299 }, 00:26:23.299 { 00:26:23.299 "name": "BaseBdev2", 00:26:23.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:23.299 "is_configured": false, 00:26:23.299 "data_offset": 0, 00:26:23.299 "data_size": 0 00:26:23.299 }, 00:26:23.299 { 00:26:23.299 "name": "BaseBdev3", 00:26:23.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:23.299 "is_configured": false, 00:26:23.299 "data_offset": 0, 00:26:23.299 "data_size": 0 00:26:23.299 }, 00:26:23.299 { 00:26:23.299 "name": "BaseBdev4", 00:26:23.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:23.299 "is_configured": false, 00:26:23.299 "data_offset": 0, 00:26:23.299 "data_size": 0 00:26:23.299 } 00:26:23.299 ] 00:26:23.299 }' 00:26:23.299 23:07:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:23.299 23:07:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:26:23.560 23:07:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:23.560 23:07:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.560 23:07:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:23.560 [2024-12-09 23:07:58.700291] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:23.560 [2024-12-09 23:07:58.700327] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:26:23.560 23:07:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.560 23:07:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:26:23.560 23:07:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.560 23:07:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:23.560 [2024-12-09 23:07:58.708314] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:23.560 [2024-12-09 23:07:58.708458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:23.560 [2024-12-09 23:07:58.708471] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:23.560 [2024-12-09 23:07:58.708479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:23.560 [2024-12-09 23:07:58.708484] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:23.560 [2024-12-09 23:07:58.708491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:23.560 [2024-12-09 23:07:58.708496] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:23.560 [2024-12-09 23:07:58.708504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:23.560 23:07:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.560 23:07:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:23.560 23:07:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.560 23:07:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:23.560 [2024-12-09 23:07:58.736648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:23.560 BaseBdev1 00:26:23.560 23:07:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.560 23:07:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:26:23.560 23:07:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:26:23.560 23:07:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:23.560 23:07:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:23.560 23:07:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:23.560 23:07:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:23.560 23:07:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:23.560 23:07:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.560 23:07:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:26:23.560 23:07:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.560 23:07:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:23.560 23:07:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.560 23:07:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:23.560 [ 00:26:23.560 { 00:26:23.560 "name": "BaseBdev1", 00:26:23.560 "aliases": [ 00:26:23.560 "7fd8cfc8-3e15-429e-a245-891317a3f316" 00:26:23.560 ], 00:26:23.560 "product_name": "Malloc disk", 00:26:23.560 "block_size": 512, 00:26:23.560 "num_blocks": 65536, 00:26:23.560 "uuid": "7fd8cfc8-3e15-429e-a245-891317a3f316", 00:26:23.560 "assigned_rate_limits": { 00:26:23.560 "rw_ios_per_sec": 0, 00:26:23.560 "rw_mbytes_per_sec": 0, 00:26:23.560 "r_mbytes_per_sec": 0, 00:26:23.560 "w_mbytes_per_sec": 0 00:26:23.560 }, 00:26:23.560 "claimed": true, 00:26:23.560 "claim_type": "exclusive_write", 00:26:23.560 "zoned": false, 00:26:23.560 "supported_io_types": { 00:26:23.560 "read": true, 00:26:23.560 "write": true, 00:26:23.560 "unmap": true, 00:26:23.560 "flush": true, 00:26:23.560 "reset": true, 00:26:23.560 "nvme_admin": false, 00:26:23.560 "nvme_io": false, 00:26:23.560 "nvme_io_md": false, 00:26:23.560 "write_zeroes": true, 00:26:23.560 "zcopy": true, 00:26:23.560 "get_zone_info": false, 00:26:23.560 "zone_management": false, 00:26:23.560 "zone_append": false, 00:26:23.560 "compare": false, 00:26:23.560 "compare_and_write": false, 00:26:23.560 "abort": true, 00:26:23.560 "seek_hole": false, 00:26:23.560 "seek_data": false, 00:26:23.560 "copy": true, 00:26:23.560 "nvme_iov_md": false 00:26:23.560 }, 00:26:23.560 "memory_domains": [ 00:26:23.560 { 00:26:23.560 "dma_device_id": "system", 00:26:23.560 "dma_device_type": 1 00:26:23.560 }, 00:26:23.560 { 00:26:23.560 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:26:23.560 "dma_device_type": 2 00:26:23.560 } 00:26:23.560 ], 00:26:23.560 "driver_specific": {} 00:26:23.560 } 00:26:23.560 ] 00:26:23.560 23:07:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.561 23:07:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:23.561 23:07:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:23.561 23:07:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:23.561 23:07:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:23.561 23:07:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:23.561 23:07:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:23.561 23:07:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:23.561 23:07:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:23.561 23:07:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:23.561 23:07:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:23.561 23:07:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:23.561 23:07:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:23.561 23:07:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:23.561 23:07:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.561 23:07:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:23.561 23:07:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.561 23:07:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:23.561 "name": "Existed_Raid", 00:26:23.561 "uuid": "c28c46df-0561-42e6-bf99-6fa39136d573", 00:26:23.561 "strip_size_kb": 64, 00:26:23.561 "state": "configuring", 00:26:23.561 "raid_level": "raid5f", 00:26:23.561 "superblock": true, 00:26:23.561 "num_base_bdevs": 4, 00:26:23.561 "num_base_bdevs_discovered": 1, 00:26:23.561 "num_base_bdevs_operational": 4, 00:26:23.561 "base_bdevs_list": [ 00:26:23.561 { 00:26:23.561 "name": "BaseBdev1", 00:26:23.561 "uuid": "7fd8cfc8-3e15-429e-a245-891317a3f316", 00:26:23.561 "is_configured": true, 00:26:23.561 "data_offset": 2048, 00:26:23.561 "data_size": 63488 00:26:23.561 }, 00:26:23.561 { 00:26:23.561 "name": "BaseBdev2", 00:26:23.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:23.561 "is_configured": false, 00:26:23.561 "data_offset": 0, 00:26:23.561 "data_size": 0 00:26:23.561 }, 00:26:23.561 { 00:26:23.561 "name": "BaseBdev3", 00:26:23.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:23.561 "is_configured": false, 00:26:23.561 "data_offset": 0, 00:26:23.561 "data_size": 0 00:26:23.561 }, 00:26:23.561 { 00:26:23.561 "name": "BaseBdev4", 00:26:23.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:23.561 "is_configured": false, 00:26:23.561 "data_offset": 0, 00:26:23.561 "data_size": 0 00:26:23.561 } 00:26:23.561 ] 00:26:23.561 }' 00:26:23.561 23:07:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:23.561 23:07:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:23.821 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:23.821 23:07:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.821 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:23.821 [2024-12-09 23:07:59.092736] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:23.821 [2024-12-09 23:07:59.092776] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:26:23.821 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.821 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:26:23.821 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.821 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:23.821 [2024-12-09 23:07:59.100791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:23.821 [2024-12-09 23:07:59.102322] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:23.821 [2024-12-09 23:07:59.102355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:23.821 [2024-12-09 23:07:59.102363] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:23.821 [2024-12-09 23:07:59.102372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:23.821 [2024-12-09 23:07:59.102378] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:23.821 [2024-12-09 23:07:59.102386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:23.821 23:07:59 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.821 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:26:23.821 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:23.821 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:23.821 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:23.821 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:23.821 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:23.821 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:23.821 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:23.821 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:23.821 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:23.821 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:23.821 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:23.821 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:23.821 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.821 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:23.821 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:23.821 23:07:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.821 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:23.821 "name": "Existed_Raid", 00:26:23.821 "uuid": "481fbf55-24a4-4a0a-baa9-2bce969e8ede", 00:26:23.821 "strip_size_kb": 64, 00:26:23.821 "state": "configuring", 00:26:23.821 "raid_level": "raid5f", 00:26:23.822 "superblock": true, 00:26:23.822 "num_base_bdevs": 4, 00:26:23.822 "num_base_bdevs_discovered": 1, 00:26:23.822 "num_base_bdevs_operational": 4, 00:26:23.822 "base_bdevs_list": [ 00:26:23.822 { 00:26:23.822 "name": "BaseBdev1", 00:26:23.822 "uuid": "7fd8cfc8-3e15-429e-a245-891317a3f316", 00:26:23.822 "is_configured": true, 00:26:23.822 "data_offset": 2048, 00:26:23.822 "data_size": 63488 00:26:23.822 }, 00:26:23.822 { 00:26:23.822 "name": "BaseBdev2", 00:26:23.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:23.822 "is_configured": false, 00:26:23.822 "data_offset": 0, 00:26:23.822 "data_size": 0 00:26:23.822 }, 00:26:23.822 { 00:26:23.822 "name": "BaseBdev3", 00:26:23.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:23.822 "is_configured": false, 00:26:23.822 "data_offset": 0, 00:26:23.822 "data_size": 0 00:26:23.822 }, 00:26:23.822 { 00:26:23.822 "name": "BaseBdev4", 00:26:23.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:23.822 "is_configured": false, 00:26:23.822 "data_offset": 0, 00:26:23.822 "data_size": 0 00:26:23.822 } 00:26:23.822 ] 00:26:23.822 }' 00:26:23.822 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:23.822 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:24.390 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:24.390 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:24.390 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:24.390 [2024-12-09 23:07:59.471363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:24.390 BaseBdev2 00:26:24.390 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.390 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:26:24.390 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:26:24.390 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:24.390 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:24.390 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:24.390 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:24.390 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:24.390 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.390 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:24.390 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.390 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:24.390 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.391 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:24.391 [ 00:26:24.391 { 00:26:24.391 "name": "BaseBdev2", 00:26:24.391 "aliases": [ 00:26:24.391 
"598bbdad-df71-49e5-9296-e289d1fe1ad7" 00:26:24.391 ], 00:26:24.391 "product_name": "Malloc disk", 00:26:24.391 "block_size": 512, 00:26:24.391 "num_blocks": 65536, 00:26:24.391 "uuid": "598bbdad-df71-49e5-9296-e289d1fe1ad7", 00:26:24.391 "assigned_rate_limits": { 00:26:24.391 "rw_ios_per_sec": 0, 00:26:24.391 "rw_mbytes_per_sec": 0, 00:26:24.391 "r_mbytes_per_sec": 0, 00:26:24.391 "w_mbytes_per_sec": 0 00:26:24.391 }, 00:26:24.391 "claimed": true, 00:26:24.391 "claim_type": "exclusive_write", 00:26:24.391 "zoned": false, 00:26:24.391 "supported_io_types": { 00:26:24.391 "read": true, 00:26:24.391 "write": true, 00:26:24.391 "unmap": true, 00:26:24.391 "flush": true, 00:26:24.391 "reset": true, 00:26:24.391 "nvme_admin": false, 00:26:24.391 "nvme_io": false, 00:26:24.391 "nvme_io_md": false, 00:26:24.391 "write_zeroes": true, 00:26:24.391 "zcopy": true, 00:26:24.391 "get_zone_info": false, 00:26:24.391 "zone_management": false, 00:26:24.391 "zone_append": false, 00:26:24.391 "compare": false, 00:26:24.391 "compare_and_write": false, 00:26:24.391 "abort": true, 00:26:24.391 "seek_hole": false, 00:26:24.391 "seek_data": false, 00:26:24.391 "copy": true, 00:26:24.391 "nvme_iov_md": false 00:26:24.391 }, 00:26:24.391 "memory_domains": [ 00:26:24.391 { 00:26:24.391 "dma_device_id": "system", 00:26:24.391 "dma_device_type": 1 00:26:24.391 }, 00:26:24.391 { 00:26:24.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:24.391 "dma_device_type": 2 00:26:24.391 } 00:26:24.391 ], 00:26:24.391 "driver_specific": {} 00:26:24.391 } 00:26:24.391 ] 00:26:24.391 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.391 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:24.391 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:24.391 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:26:24.391 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:24.391 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:24.391 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:24.391 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:24.391 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:24.391 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:24.391 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:24.391 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:24.391 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:24.391 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:24.391 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:24.391 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:24.391 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.391 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:24.391 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.391 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:24.391 "name": "Existed_Raid", 00:26:24.391 "uuid": 
"481fbf55-24a4-4a0a-baa9-2bce969e8ede", 00:26:24.391 "strip_size_kb": 64, 00:26:24.391 "state": "configuring", 00:26:24.391 "raid_level": "raid5f", 00:26:24.391 "superblock": true, 00:26:24.391 "num_base_bdevs": 4, 00:26:24.391 "num_base_bdevs_discovered": 2, 00:26:24.391 "num_base_bdevs_operational": 4, 00:26:24.391 "base_bdevs_list": [ 00:26:24.391 { 00:26:24.391 "name": "BaseBdev1", 00:26:24.391 "uuid": "7fd8cfc8-3e15-429e-a245-891317a3f316", 00:26:24.391 "is_configured": true, 00:26:24.391 "data_offset": 2048, 00:26:24.391 "data_size": 63488 00:26:24.391 }, 00:26:24.391 { 00:26:24.391 "name": "BaseBdev2", 00:26:24.391 "uuid": "598bbdad-df71-49e5-9296-e289d1fe1ad7", 00:26:24.391 "is_configured": true, 00:26:24.391 "data_offset": 2048, 00:26:24.391 "data_size": 63488 00:26:24.391 }, 00:26:24.391 { 00:26:24.391 "name": "BaseBdev3", 00:26:24.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:24.391 "is_configured": false, 00:26:24.391 "data_offset": 0, 00:26:24.391 "data_size": 0 00:26:24.391 }, 00:26:24.391 { 00:26:24.391 "name": "BaseBdev4", 00:26:24.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:24.391 "is_configured": false, 00:26:24.391 "data_offset": 0, 00:26:24.391 "data_size": 0 00:26:24.391 } 00:26:24.391 ] 00:26:24.391 }' 00:26:24.391 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:24.391 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:24.651 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:26:24.651 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.651 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:24.651 [2024-12-09 23:07:59.870993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:24.651 BaseBdev3 
00:26:24.651 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.651 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:26:24.651 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:26:24.651 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:24.651 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:24.651 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:24.651 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:24.651 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:24.651 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.651 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:24.651 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.651 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:24.651 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.651 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:24.651 [ 00:26:24.651 { 00:26:24.651 "name": "BaseBdev3", 00:26:24.651 "aliases": [ 00:26:24.651 "d5b53e2b-74c6-4ba9-9fb6-db9887fe364a" 00:26:24.651 ], 00:26:24.651 "product_name": "Malloc disk", 00:26:24.651 "block_size": 512, 00:26:24.651 "num_blocks": 65536, 00:26:24.651 "uuid": "d5b53e2b-74c6-4ba9-9fb6-db9887fe364a", 00:26:24.651 
"assigned_rate_limits": { 00:26:24.651 "rw_ios_per_sec": 0, 00:26:24.651 "rw_mbytes_per_sec": 0, 00:26:24.651 "r_mbytes_per_sec": 0, 00:26:24.651 "w_mbytes_per_sec": 0 00:26:24.651 }, 00:26:24.651 "claimed": true, 00:26:24.651 "claim_type": "exclusive_write", 00:26:24.651 "zoned": false, 00:26:24.651 "supported_io_types": { 00:26:24.651 "read": true, 00:26:24.651 "write": true, 00:26:24.651 "unmap": true, 00:26:24.651 "flush": true, 00:26:24.651 "reset": true, 00:26:24.651 "nvme_admin": false, 00:26:24.651 "nvme_io": false, 00:26:24.652 "nvme_io_md": false, 00:26:24.652 "write_zeroes": true, 00:26:24.652 "zcopy": true, 00:26:24.652 "get_zone_info": false, 00:26:24.652 "zone_management": false, 00:26:24.652 "zone_append": false, 00:26:24.652 "compare": false, 00:26:24.652 "compare_and_write": false, 00:26:24.652 "abort": true, 00:26:24.652 "seek_hole": false, 00:26:24.652 "seek_data": false, 00:26:24.652 "copy": true, 00:26:24.652 "nvme_iov_md": false 00:26:24.652 }, 00:26:24.652 "memory_domains": [ 00:26:24.652 { 00:26:24.652 "dma_device_id": "system", 00:26:24.652 "dma_device_type": 1 00:26:24.652 }, 00:26:24.652 { 00:26:24.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:24.652 "dma_device_type": 2 00:26:24.652 } 00:26:24.652 ], 00:26:24.652 "driver_specific": {} 00:26:24.652 } 00:26:24.652 ] 00:26:24.652 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.652 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:24.652 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:24.652 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:24.652 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:24.652 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:26:24.652 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:24.652 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:24.652 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:24.652 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:24.652 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:24.652 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:24.652 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:24.652 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:24.652 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:24.652 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.652 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:24.652 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:24.652 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.652 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:24.652 "name": "Existed_Raid", 00:26:24.652 "uuid": "481fbf55-24a4-4a0a-baa9-2bce969e8ede", 00:26:24.652 "strip_size_kb": 64, 00:26:24.652 "state": "configuring", 00:26:24.652 "raid_level": "raid5f", 00:26:24.652 "superblock": true, 00:26:24.652 "num_base_bdevs": 4, 00:26:24.652 "num_base_bdevs_discovered": 3, 
00:26:24.652 "num_base_bdevs_operational": 4, 00:26:24.652 "base_bdevs_list": [ 00:26:24.652 { 00:26:24.652 "name": "BaseBdev1", 00:26:24.652 "uuid": "7fd8cfc8-3e15-429e-a245-891317a3f316", 00:26:24.652 "is_configured": true, 00:26:24.652 "data_offset": 2048, 00:26:24.652 "data_size": 63488 00:26:24.652 }, 00:26:24.652 { 00:26:24.652 "name": "BaseBdev2", 00:26:24.652 "uuid": "598bbdad-df71-49e5-9296-e289d1fe1ad7", 00:26:24.652 "is_configured": true, 00:26:24.652 "data_offset": 2048, 00:26:24.652 "data_size": 63488 00:26:24.652 }, 00:26:24.652 { 00:26:24.652 "name": "BaseBdev3", 00:26:24.652 "uuid": "d5b53e2b-74c6-4ba9-9fb6-db9887fe364a", 00:26:24.652 "is_configured": true, 00:26:24.652 "data_offset": 2048, 00:26:24.652 "data_size": 63488 00:26:24.652 }, 00:26:24.652 { 00:26:24.652 "name": "BaseBdev4", 00:26:24.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:24.652 "is_configured": false, 00:26:24.652 "data_offset": 0, 00:26:24.652 "data_size": 0 00:26:24.652 } 00:26:24.652 ] 00:26:24.652 }' 00:26:24.652 23:07:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:24.652 23:07:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:24.913 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:26:24.913 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.913 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:24.913 [2024-12-09 23:08:00.233466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:24.913 [2024-12-09 23:08:00.233820] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:24.913 [2024-12-09 23:08:00.233898] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:24.913 [2024-12-09 
23:08:00.234161] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:26:24.913 BaseBdev4 00:26:24.913 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.913 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:26:24.913 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:26:24.913 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:24.913 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:24.913 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:24.913 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:24.913 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:24.913 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.913 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:24.913 [2024-12-09 23:08:00.238249] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:24.913 [2024-12-09 23:08:00.238267] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:26:24.913 [2024-12-09 23:08:00.238458] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:24.913 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.913 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:24.913 23:08:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.913 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:24.913 [ 00:26:24.913 { 00:26:24.913 "name": "BaseBdev4", 00:26:24.913 "aliases": [ 00:26:24.913 "f7b26333-c87b-4cd6-9e12-4c49f450f419" 00:26:24.913 ], 00:26:24.913 "product_name": "Malloc disk", 00:26:24.913 "block_size": 512, 00:26:24.913 "num_blocks": 65536, 00:26:24.913 "uuid": "f7b26333-c87b-4cd6-9e12-4c49f450f419", 00:26:24.913 "assigned_rate_limits": { 00:26:24.913 "rw_ios_per_sec": 0, 00:26:24.913 "rw_mbytes_per_sec": 0, 00:26:24.913 "r_mbytes_per_sec": 0, 00:26:24.913 "w_mbytes_per_sec": 0 00:26:24.913 }, 00:26:24.913 "claimed": true, 00:26:24.913 "claim_type": "exclusive_write", 00:26:24.913 "zoned": false, 00:26:24.913 "supported_io_types": { 00:26:24.913 "read": true, 00:26:24.913 "write": true, 00:26:24.914 "unmap": true, 00:26:24.914 "flush": true, 00:26:24.914 "reset": true, 00:26:24.914 "nvme_admin": false, 00:26:24.914 "nvme_io": false, 00:26:24.914 "nvme_io_md": false, 00:26:24.914 "write_zeroes": true, 00:26:24.914 "zcopy": true, 00:26:24.914 "get_zone_info": false, 00:26:24.914 "zone_management": false, 00:26:24.914 "zone_append": false, 00:26:24.914 "compare": false, 00:26:24.914 "compare_and_write": false, 00:26:24.914 "abort": true, 00:26:24.914 "seek_hole": false, 00:26:24.914 "seek_data": false, 00:26:24.914 "copy": true, 00:26:24.914 "nvme_iov_md": false 00:26:24.914 }, 00:26:24.914 "memory_domains": [ 00:26:24.914 { 00:26:24.914 "dma_device_id": "system", 00:26:24.914 "dma_device_type": 1 00:26:24.914 }, 00:26:24.914 { 00:26:24.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:24.914 "dma_device_type": 2 00:26:24.914 } 00:26:24.914 ], 00:26:24.914 "driver_specific": {} 00:26:24.914 } 00:26:24.914 ] 00:26:24.914 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.914 23:08:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:24.914 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:24.914 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:24.914 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:26:24.914 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:24.914 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:24.914 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:24.914 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:24.914 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:24.914 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:24.914 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:24.914 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:24.914 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:24.914 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:24.914 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:24.914 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.914 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:26:24.914 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.174 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:25.174 "name": "Existed_Raid", 00:26:25.174 "uuid": "481fbf55-24a4-4a0a-baa9-2bce969e8ede", 00:26:25.174 "strip_size_kb": 64, 00:26:25.174 "state": "online", 00:26:25.174 "raid_level": "raid5f", 00:26:25.174 "superblock": true, 00:26:25.174 "num_base_bdevs": 4, 00:26:25.174 "num_base_bdevs_discovered": 4, 00:26:25.174 "num_base_bdevs_operational": 4, 00:26:25.174 "base_bdevs_list": [ 00:26:25.174 { 00:26:25.174 "name": "BaseBdev1", 00:26:25.174 "uuid": "7fd8cfc8-3e15-429e-a245-891317a3f316", 00:26:25.174 "is_configured": true, 00:26:25.174 "data_offset": 2048, 00:26:25.174 "data_size": 63488 00:26:25.174 }, 00:26:25.174 { 00:26:25.174 "name": "BaseBdev2", 00:26:25.174 "uuid": "598bbdad-df71-49e5-9296-e289d1fe1ad7", 00:26:25.174 "is_configured": true, 00:26:25.174 "data_offset": 2048, 00:26:25.174 "data_size": 63488 00:26:25.174 }, 00:26:25.174 { 00:26:25.174 "name": "BaseBdev3", 00:26:25.174 "uuid": "d5b53e2b-74c6-4ba9-9fb6-db9887fe364a", 00:26:25.174 "is_configured": true, 00:26:25.174 "data_offset": 2048, 00:26:25.174 "data_size": 63488 00:26:25.174 }, 00:26:25.174 { 00:26:25.174 "name": "BaseBdev4", 00:26:25.174 "uuid": "f7b26333-c87b-4cd6-9e12-4c49f450f419", 00:26:25.174 "is_configured": true, 00:26:25.174 "data_offset": 2048, 00:26:25.174 "data_size": 63488 00:26:25.174 } 00:26:25.174 ] 00:26:25.174 }' 00:26:25.174 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:25.174 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:25.434 [2024-12-09 23:08:00.567020] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:25.434 "name": "Existed_Raid", 00:26:25.434 "aliases": [ 00:26:25.434 "481fbf55-24a4-4a0a-baa9-2bce969e8ede" 00:26:25.434 ], 00:26:25.434 "product_name": "Raid Volume", 00:26:25.434 "block_size": 512, 00:26:25.434 "num_blocks": 190464, 00:26:25.434 "uuid": "481fbf55-24a4-4a0a-baa9-2bce969e8ede", 00:26:25.434 "assigned_rate_limits": { 00:26:25.434 "rw_ios_per_sec": 0, 00:26:25.434 "rw_mbytes_per_sec": 0, 00:26:25.434 "r_mbytes_per_sec": 0, 00:26:25.434 "w_mbytes_per_sec": 0 00:26:25.434 }, 00:26:25.434 "claimed": false, 00:26:25.434 "zoned": false, 00:26:25.434 "supported_io_types": { 00:26:25.434 "read": true, 00:26:25.434 "write": true, 00:26:25.434 "unmap": false, 00:26:25.434 "flush": false, 
00:26:25.434 "reset": true, 00:26:25.434 "nvme_admin": false, 00:26:25.434 "nvme_io": false, 00:26:25.434 "nvme_io_md": false, 00:26:25.434 "write_zeroes": true, 00:26:25.434 "zcopy": false, 00:26:25.434 "get_zone_info": false, 00:26:25.434 "zone_management": false, 00:26:25.434 "zone_append": false, 00:26:25.434 "compare": false, 00:26:25.434 "compare_and_write": false, 00:26:25.434 "abort": false, 00:26:25.434 "seek_hole": false, 00:26:25.434 "seek_data": false, 00:26:25.434 "copy": false, 00:26:25.434 "nvme_iov_md": false 00:26:25.434 }, 00:26:25.434 "driver_specific": { 00:26:25.434 "raid": { 00:26:25.434 "uuid": "481fbf55-24a4-4a0a-baa9-2bce969e8ede", 00:26:25.434 "strip_size_kb": 64, 00:26:25.434 "state": "online", 00:26:25.434 "raid_level": "raid5f", 00:26:25.434 "superblock": true, 00:26:25.434 "num_base_bdevs": 4, 00:26:25.434 "num_base_bdevs_discovered": 4, 00:26:25.434 "num_base_bdevs_operational": 4, 00:26:25.434 "base_bdevs_list": [ 00:26:25.434 { 00:26:25.434 "name": "BaseBdev1", 00:26:25.434 "uuid": "7fd8cfc8-3e15-429e-a245-891317a3f316", 00:26:25.434 "is_configured": true, 00:26:25.434 "data_offset": 2048, 00:26:25.434 "data_size": 63488 00:26:25.434 }, 00:26:25.434 { 00:26:25.434 "name": "BaseBdev2", 00:26:25.434 "uuid": "598bbdad-df71-49e5-9296-e289d1fe1ad7", 00:26:25.434 "is_configured": true, 00:26:25.434 "data_offset": 2048, 00:26:25.434 "data_size": 63488 00:26:25.434 }, 00:26:25.434 { 00:26:25.434 "name": "BaseBdev3", 00:26:25.434 "uuid": "d5b53e2b-74c6-4ba9-9fb6-db9887fe364a", 00:26:25.434 "is_configured": true, 00:26:25.434 "data_offset": 2048, 00:26:25.434 "data_size": 63488 00:26:25.434 }, 00:26:25.434 { 00:26:25.434 "name": "BaseBdev4", 00:26:25.434 "uuid": "f7b26333-c87b-4cd6-9e12-4c49f450f419", 00:26:25.434 "is_configured": true, 00:26:25.434 "data_offset": 2048, 00:26:25.434 "data_size": 63488 00:26:25.434 } 00:26:25.434 ] 00:26:25.434 } 00:26:25.434 } 00:26:25.434 }' 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:26:25.434 BaseBdev2 00:26:25.434 BaseBdev3 00:26:25.434 BaseBdev4' 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:25.434 23:08:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:25.434 23:08:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.434 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:25.434 [2024-12-09 23:08:00.790944] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:25.695 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.695 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:26:25.695 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:26:25.695 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:25.695 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:26:25.695 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:26:25.695 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:26:25.695 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:25.695 23:08:00 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:25.695 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:25.695 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:25.695 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:25.695 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:25.695 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:25.695 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:25.695 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:25.695 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:25.695 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.695 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:25.695 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:25.695 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.695 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:25.695 "name": "Existed_Raid", 00:26:25.695 "uuid": "481fbf55-24a4-4a0a-baa9-2bce969e8ede", 00:26:25.695 "strip_size_kb": 64, 00:26:25.695 "state": "online", 00:26:25.695 "raid_level": "raid5f", 00:26:25.695 "superblock": true, 00:26:25.695 "num_base_bdevs": 4, 00:26:25.695 "num_base_bdevs_discovered": 3, 00:26:25.695 "num_base_bdevs_operational": 3, 00:26:25.695 "base_bdevs_list": [ 00:26:25.695 { 00:26:25.695 "name": 
null, 00:26:25.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:25.695 "is_configured": false, 00:26:25.695 "data_offset": 0, 00:26:25.695 "data_size": 63488 00:26:25.695 }, 00:26:25.695 { 00:26:25.695 "name": "BaseBdev2", 00:26:25.695 "uuid": "598bbdad-df71-49e5-9296-e289d1fe1ad7", 00:26:25.695 "is_configured": true, 00:26:25.695 "data_offset": 2048, 00:26:25.695 "data_size": 63488 00:26:25.695 }, 00:26:25.695 { 00:26:25.695 "name": "BaseBdev3", 00:26:25.695 "uuid": "d5b53e2b-74c6-4ba9-9fb6-db9887fe364a", 00:26:25.695 "is_configured": true, 00:26:25.695 "data_offset": 2048, 00:26:25.695 "data_size": 63488 00:26:25.695 }, 00:26:25.695 { 00:26:25.695 "name": "BaseBdev4", 00:26:25.695 "uuid": "f7b26333-c87b-4cd6-9e12-4c49f450f419", 00:26:25.695 "is_configured": true, 00:26:25.695 "data_offset": 2048, 00:26:25.695 "data_size": 63488 00:26:25.695 } 00:26:25.695 ] 00:26:25.695 }' 00:26:25.695 23:08:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:25.695 23:08:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:25.956 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:26:25.956 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:25.956 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:25.956 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:25.956 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.956 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:25.956 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.956 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:26:25.956 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:25.956 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:26:25.956 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.956 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:25.956 [2024-12-09 23:08:01.202342] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:25.956 [2024-12-09 23:08:01.202474] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:25.956 [2024-12-09 23:08:01.248479] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:25.956 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.956 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:25.956 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:25.956 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:25.956 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:25.956 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.956 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:25.956 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.956 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:25.956 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:26:25.956 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:26:25.956 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.956 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:25.956 [2024-12-09 23:08:01.288554] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:26.217 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.217 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:26.217 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:26.217 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:26.217 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:26.217 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.217 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:26.217 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.217 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:26.217 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:26.217 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:26:26.217 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.217 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:26.217 [2024-12-09 
23:08:01.375493] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:26:26.217 [2024-12-09 23:08:01.375617] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:26:26.217 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.217 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:26.217 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:26.217 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:26.217 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.217 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:26:26.217 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:26.217 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.217 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:26:26.217 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:26:26.217 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:26:26.217 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:26:26.217 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:26.217 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:26.217 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.217 23:08:01 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:26.217 BaseBdev2 00:26:26.217 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.217 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:26:26.217 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:26:26.217 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:26.217 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:26.217 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:26.218 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:26.218 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:26.218 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.218 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:26.218 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.218 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:26.218 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.218 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:26.218 [ 00:26:26.218 { 00:26:26.218 "name": "BaseBdev2", 00:26:26.218 "aliases": [ 00:26:26.218 "24da1e5c-014d-469b-8323-0f131fb83247" 00:26:26.218 ], 00:26:26.218 "product_name": "Malloc disk", 00:26:26.218 "block_size": 512, 00:26:26.218 
"num_blocks": 65536, 00:26:26.218 "uuid": "24da1e5c-014d-469b-8323-0f131fb83247", 00:26:26.218 "assigned_rate_limits": { 00:26:26.218 "rw_ios_per_sec": 0, 00:26:26.218 "rw_mbytes_per_sec": 0, 00:26:26.218 "r_mbytes_per_sec": 0, 00:26:26.218 "w_mbytes_per_sec": 0 00:26:26.218 }, 00:26:26.218 "claimed": false, 00:26:26.218 "zoned": false, 00:26:26.218 "supported_io_types": { 00:26:26.218 "read": true, 00:26:26.218 "write": true, 00:26:26.218 "unmap": true, 00:26:26.218 "flush": true, 00:26:26.218 "reset": true, 00:26:26.218 "nvme_admin": false, 00:26:26.218 "nvme_io": false, 00:26:26.218 "nvme_io_md": false, 00:26:26.218 "write_zeroes": true, 00:26:26.218 "zcopy": true, 00:26:26.218 "get_zone_info": false, 00:26:26.218 "zone_management": false, 00:26:26.218 "zone_append": false, 00:26:26.218 "compare": false, 00:26:26.218 "compare_and_write": false, 00:26:26.218 "abort": true, 00:26:26.218 "seek_hole": false, 00:26:26.218 "seek_data": false, 00:26:26.218 "copy": true, 00:26:26.218 "nvme_iov_md": false 00:26:26.218 }, 00:26:26.218 "memory_domains": [ 00:26:26.218 { 00:26:26.218 "dma_device_id": "system", 00:26:26.218 "dma_device_type": 1 00:26:26.218 }, 00:26:26.218 { 00:26:26.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:26.218 "dma_device_type": 2 00:26:26.218 } 00:26:26.218 ], 00:26:26.218 "driver_specific": {} 00:26:26.218 } 00:26:26.218 ] 00:26:26.218 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.218 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:26.218 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:26.218 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:26.218 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:26:26.218 23:08:01 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.218 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:26.218 BaseBdev3 00:26:26.218 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.218 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:26:26.218 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:26:26.218 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:26.218 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:26.218 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:26.218 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:26.218 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:26.218 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.218 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:26.218 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.218 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:26.218 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.218 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:26.218 [ 00:26:26.218 { 00:26:26.218 "name": "BaseBdev3", 00:26:26.218 "aliases": [ 00:26:26.218 
"4627cb0e-9232-4c94-8d0b-02a3d9343824" 00:26:26.218 ], 00:26:26.218 "product_name": "Malloc disk", 00:26:26.218 "block_size": 512, 00:26:26.218 "num_blocks": 65536, 00:26:26.218 "uuid": "4627cb0e-9232-4c94-8d0b-02a3d9343824", 00:26:26.218 "assigned_rate_limits": { 00:26:26.218 "rw_ios_per_sec": 0, 00:26:26.218 "rw_mbytes_per_sec": 0, 00:26:26.218 "r_mbytes_per_sec": 0, 00:26:26.218 "w_mbytes_per_sec": 0 00:26:26.218 }, 00:26:26.218 "claimed": false, 00:26:26.218 "zoned": false, 00:26:26.218 "supported_io_types": { 00:26:26.218 "read": true, 00:26:26.218 "write": true, 00:26:26.218 "unmap": true, 00:26:26.218 "flush": true, 00:26:26.218 "reset": true, 00:26:26.218 "nvme_admin": false, 00:26:26.218 "nvme_io": false, 00:26:26.218 "nvme_io_md": false, 00:26:26.218 "write_zeroes": true, 00:26:26.218 "zcopy": true, 00:26:26.218 "get_zone_info": false, 00:26:26.218 "zone_management": false, 00:26:26.218 "zone_append": false, 00:26:26.218 "compare": false, 00:26:26.218 "compare_and_write": false, 00:26:26.218 "abort": true, 00:26:26.218 "seek_hole": false, 00:26:26.218 "seek_data": false, 00:26:26.218 "copy": true, 00:26:26.218 "nvme_iov_md": false 00:26:26.218 }, 00:26:26.218 "memory_domains": [ 00:26:26.218 { 00:26:26.218 "dma_device_id": "system", 00:26:26.218 "dma_device_type": 1 00:26:26.218 }, 00:26:26.218 { 00:26:26.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:26.218 "dma_device_type": 2 00:26:26.218 } 00:26:26.218 ], 00:26:26.218 "driver_specific": {} 00:26:26.218 } 00:26:26.218 ] 00:26:26.218 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.218 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:26.218 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:26.218 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:26.218 23:08:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:26:26.218 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.218 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:26.480 BaseBdev4 00:26:26.480 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.480 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:26:26.480 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:26:26.480 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:26.480 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:26.480 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:26.480 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:26.480 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:26.480 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.480 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:26.480 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.480 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:26.480 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.480 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:26:26.480 [ 00:26:26.480 { 00:26:26.480 "name": "BaseBdev4", 00:26:26.480 "aliases": [ 00:26:26.480 "ede8ebc4-ab47-451f-b380-16d8cdc560fb" 00:26:26.480 ], 00:26:26.480 "product_name": "Malloc disk", 00:26:26.480 "block_size": 512, 00:26:26.480 "num_blocks": 65536, 00:26:26.481 "uuid": "ede8ebc4-ab47-451f-b380-16d8cdc560fb", 00:26:26.481 "assigned_rate_limits": { 00:26:26.481 "rw_ios_per_sec": 0, 00:26:26.481 "rw_mbytes_per_sec": 0, 00:26:26.481 "r_mbytes_per_sec": 0, 00:26:26.481 "w_mbytes_per_sec": 0 00:26:26.481 }, 00:26:26.481 "claimed": false, 00:26:26.481 "zoned": false, 00:26:26.481 "supported_io_types": { 00:26:26.481 "read": true, 00:26:26.481 "write": true, 00:26:26.481 "unmap": true, 00:26:26.481 "flush": true, 00:26:26.481 "reset": true, 00:26:26.481 "nvme_admin": false, 00:26:26.481 "nvme_io": false, 00:26:26.481 "nvme_io_md": false, 00:26:26.481 "write_zeroes": true, 00:26:26.481 "zcopy": true, 00:26:26.481 "get_zone_info": false, 00:26:26.481 "zone_management": false, 00:26:26.481 "zone_append": false, 00:26:26.481 "compare": false, 00:26:26.481 "compare_and_write": false, 00:26:26.481 "abort": true, 00:26:26.481 "seek_hole": false, 00:26:26.481 "seek_data": false, 00:26:26.481 "copy": true, 00:26:26.481 "nvme_iov_md": false 00:26:26.481 }, 00:26:26.481 "memory_domains": [ 00:26:26.481 { 00:26:26.481 "dma_device_id": "system", 00:26:26.481 "dma_device_type": 1 00:26:26.481 }, 00:26:26.481 { 00:26:26.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:26.481 "dma_device_type": 2 00:26:26.481 } 00:26:26.481 ], 00:26:26.481 "driver_specific": {} 00:26:26.481 } 00:26:26.481 ] 00:26:26.481 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.481 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:26.481 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:26.481 23:08:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:26.481 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:26:26.481 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.481 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:26.481 [2024-12-09 23:08:01.611393] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:26.481 [2024-12-09 23:08:01.611512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:26.481 [2024-12-09 23:08:01.611578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:26.481 [2024-12-09 23:08:01.613160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:26.481 [2024-12-09 23:08:01.613202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:26.481 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.481 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:26.481 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:26.481 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:26.481 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:26.481 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:26.481 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:26:26.481 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:26.481 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:26.481 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:26.481 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:26.481 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:26.481 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.481 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:26.481 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:26.481 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.481 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:26.481 "name": "Existed_Raid", 00:26:26.481 "uuid": "ec8575d9-4df7-4555-8b84-6cb6c458fb20", 00:26:26.481 "strip_size_kb": 64, 00:26:26.481 "state": "configuring", 00:26:26.481 "raid_level": "raid5f", 00:26:26.481 "superblock": true, 00:26:26.481 "num_base_bdevs": 4, 00:26:26.481 "num_base_bdevs_discovered": 3, 00:26:26.481 "num_base_bdevs_operational": 4, 00:26:26.481 "base_bdevs_list": [ 00:26:26.481 { 00:26:26.481 "name": "BaseBdev1", 00:26:26.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:26.481 "is_configured": false, 00:26:26.481 "data_offset": 0, 00:26:26.481 "data_size": 0 00:26:26.481 }, 00:26:26.481 { 00:26:26.481 "name": "BaseBdev2", 00:26:26.481 "uuid": "24da1e5c-014d-469b-8323-0f131fb83247", 00:26:26.481 "is_configured": true, 00:26:26.481 "data_offset": 2048, 00:26:26.481 
"data_size": 63488 00:26:26.481 }, 00:26:26.481 { 00:26:26.481 "name": "BaseBdev3", 00:26:26.481 "uuid": "4627cb0e-9232-4c94-8d0b-02a3d9343824", 00:26:26.481 "is_configured": true, 00:26:26.481 "data_offset": 2048, 00:26:26.481 "data_size": 63488 00:26:26.481 }, 00:26:26.481 { 00:26:26.481 "name": "BaseBdev4", 00:26:26.481 "uuid": "ede8ebc4-ab47-451f-b380-16d8cdc560fb", 00:26:26.481 "is_configured": true, 00:26:26.481 "data_offset": 2048, 00:26:26.481 "data_size": 63488 00:26:26.481 } 00:26:26.481 ] 00:26:26.481 }' 00:26:26.481 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:26.481 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:26.743 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:26:26.743 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.743 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:26.743 [2024-12-09 23:08:01.935442] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:26.743 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.743 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:26.743 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:26.743 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:26.743 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:26.743 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:26.743 23:08:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:26.743 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:26.743 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:26.743 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:26.743 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:26.743 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:26.743 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.743 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:26.743 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:26.743 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.743 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:26.743 "name": "Existed_Raid", 00:26:26.743 "uuid": "ec8575d9-4df7-4555-8b84-6cb6c458fb20", 00:26:26.743 "strip_size_kb": 64, 00:26:26.743 "state": "configuring", 00:26:26.743 "raid_level": "raid5f", 00:26:26.743 "superblock": true, 00:26:26.743 "num_base_bdevs": 4, 00:26:26.743 "num_base_bdevs_discovered": 2, 00:26:26.743 "num_base_bdevs_operational": 4, 00:26:26.743 "base_bdevs_list": [ 00:26:26.743 { 00:26:26.743 "name": "BaseBdev1", 00:26:26.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:26.743 "is_configured": false, 00:26:26.743 "data_offset": 0, 00:26:26.743 "data_size": 0 00:26:26.743 }, 00:26:26.743 { 00:26:26.743 "name": null, 00:26:26.743 "uuid": "24da1e5c-014d-469b-8323-0f131fb83247", 00:26:26.743 
"is_configured": false, 00:26:26.743 "data_offset": 0, 00:26:26.743 "data_size": 63488 00:26:26.743 }, 00:26:26.743 { 00:26:26.743 "name": "BaseBdev3", 00:26:26.743 "uuid": "4627cb0e-9232-4c94-8d0b-02a3d9343824", 00:26:26.743 "is_configured": true, 00:26:26.743 "data_offset": 2048, 00:26:26.743 "data_size": 63488 00:26:26.743 }, 00:26:26.743 { 00:26:26.743 "name": "BaseBdev4", 00:26:26.743 "uuid": "ede8ebc4-ab47-451f-b380-16d8cdc560fb", 00:26:26.743 "is_configured": true, 00:26:26.743 "data_offset": 2048, 00:26:26.743 "data_size": 63488 00:26:26.743 } 00:26:26.743 ] 00:26:26.743 }' 00:26:26.743 23:08:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:26.743 23:08:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:27.005 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:27.005 23:08:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.005 23:08:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:27.005 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:27.005 23:08:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.005 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:26:27.005 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:27.005 23:08:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.005 23:08:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:27.006 [2024-12-09 23:08:02.337667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:26:27.006 BaseBdev1 00:26:27.006 23:08:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.006 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:26:27.006 23:08:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:26:27.006 23:08:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:27.006 23:08:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:27.006 23:08:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:27.006 23:08:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:27.006 23:08:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:27.006 23:08:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.006 23:08:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:27.006 23:08:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.006 23:08:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:27.006 23:08:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.006 23:08:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:27.006 [ 00:26:27.006 { 00:26:27.006 "name": "BaseBdev1", 00:26:27.006 "aliases": [ 00:26:27.006 "6325f51d-dd2e-4609-a2be-1c72175722d6" 00:26:27.006 ], 00:26:27.006 "product_name": "Malloc disk", 00:26:27.006 "block_size": 512, 00:26:27.006 "num_blocks": 65536, 00:26:27.006 "uuid": "6325f51d-dd2e-4609-a2be-1c72175722d6", 
00:26:27.006 "assigned_rate_limits": { 00:26:27.006 "rw_ios_per_sec": 0, 00:26:27.006 "rw_mbytes_per_sec": 0, 00:26:27.006 "r_mbytes_per_sec": 0, 00:26:27.006 "w_mbytes_per_sec": 0 00:26:27.006 }, 00:26:27.006 "claimed": true, 00:26:27.006 "claim_type": "exclusive_write", 00:26:27.006 "zoned": false, 00:26:27.006 "supported_io_types": { 00:26:27.006 "read": true, 00:26:27.006 "write": true, 00:26:27.006 "unmap": true, 00:26:27.006 "flush": true, 00:26:27.006 "reset": true, 00:26:27.006 "nvme_admin": false, 00:26:27.006 "nvme_io": false, 00:26:27.006 "nvme_io_md": false, 00:26:27.006 "write_zeroes": true, 00:26:27.006 "zcopy": true, 00:26:27.006 "get_zone_info": false, 00:26:27.006 "zone_management": false, 00:26:27.006 "zone_append": false, 00:26:27.006 "compare": false, 00:26:27.006 "compare_and_write": false, 00:26:27.006 "abort": true, 00:26:27.006 "seek_hole": false, 00:26:27.006 "seek_data": false, 00:26:27.006 "copy": true, 00:26:27.006 "nvme_iov_md": false 00:26:27.006 }, 00:26:27.006 "memory_domains": [ 00:26:27.006 { 00:26:27.006 "dma_device_id": "system", 00:26:27.006 "dma_device_type": 1 00:26:27.006 }, 00:26:27.006 { 00:26:27.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:27.267 "dma_device_type": 2 00:26:27.267 } 00:26:27.267 ], 00:26:27.267 "driver_specific": {} 00:26:27.267 } 00:26:27.267 ] 00:26:27.267 23:08:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.267 23:08:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:27.267 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:27.267 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:27.267 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:27.267 23:08:02 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:27.267 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:27.267 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:27.267 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:27.267 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:27.267 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:27.267 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:27.267 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:27.267 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:27.267 23:08:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.267 23:08:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:27.267 23:08:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.267 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:27.267 "name": "Existed_Raid", 00:26:27.267 "uuid": "ec8575d9-4df7-4555-8b84-6cb6c458fb20", 00:26:27.267 "strip_size_kb": 64, 00:26:27.267 "state": "configuring", 00:26:27.267 "raid_level": "raid5f", 00:26:27.267 "superblock": true, 00:26:27.267 "num_base_bdevs": 4, 00:26:27.267 "num_base_bdevs_discovered": 3, 00:26:27.267 "num_base_bdevs_operational": 4, 00:26:27.267 "base_bdevs_list": [ 00:26:27.267 { 00:26:27.267 "name": "BaseBdev1", 00:26:27.267 "uuid": "6325f51d-dd2e-4609-a2be-1c72175722d6", 
00:26:27.267 "is_configured": true, 00:26:27.267 "data_offset": 2048, 00:26:27.267 "data_size": 63488 00:26:27.267 }, 00:26:27.267 { 00:26:27.267 "name": null, 00:26:27.267 "uuid": "24da1e5c-014d-469b-8323-0f131fb83247", 00:26:27.267 "is_configured": false, 00:26:27.267 "data_offset": 0, 00:26:27.267 "data_size": 63488 00:26:27.267 }, 00:26:27.267 { 00:26:27.267 "name": "BaseBdev3", 00:26:27.267 "uuid": "4627cb0e-9232-4c94-8d0b-02a3d9343824", 00:26:27.267 "is_configured": true, 00:26:27.267 "data_offset": 2048, 00:26:27.267 "data_size": 63488 00:26:27.267 }, 00:26:27.267 { 00:26:27.267 "name": "BaseBdev4", 00:26:27.267 "uuid": "ede8ebc4-ab47-451f-b380-16d8cdc560fb", 00:26:27.267 "is_configured": true, 00:26:27.267 "data_offset": 2048, 00:26:27.267 "data_size": 63488 00:26:27.267 } 00:26:27.267 ] 00:26:27.267 }' 00:26:27.267 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:27.267 23:08:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:27.529 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:27.530 23:08:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.530 23:08:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:27.530 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:27.530 23:08:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.530 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:26:27.530 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:26:27.530 23:08:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:27.530 23:08:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:27.530 [2024-12-09 23:08:02.729808] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:27.530 23:08:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.530 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:27.530 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:27.530 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:27.530 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:27.530 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:27.530 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:27.530 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:27.530 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:27.530 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:27.530 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:27.530 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:27.530 23:08:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.530 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:27.530 23:08:02 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:26:27.530 23:08:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.530 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:27.530 "name": "Existed_Raid", 00:26:27.530 "uuid": "ec8575d9-4df7-4555-8b84-6cb6c458fb20", 00:26:27.530 "strip_size_kb": 64, 00:26:27.530 "state": "configuring", 00:26:27.530 "raid_level": "raid5f", 00:26:27.530 "superblock": true, 00:26:27.530 "num_base_bdevs": 4, 00:26:27.530 "num_base_bdevs_discovered": 2, 00:26:27.530 "num_base_bdevs_operational": 4, 00:26:27.530 "base_bdevs_list": [ 00:26:27.530 { 00:26:27.530 "name": "BaseBdev1", 00:26:27.530 "uuid": "6325f51d-dd2e-4609-a2be-1c72175722d6", 00:26:27.530 "is_configured": true, 00:26:27.530 "data_offset": 2048, 00:26:27.530 "data_size": 63488 00:26:27.530 }, 00:26:27.530 { 00:26:27.530 "name": null, 00:26:27.530 "uuid": "24da1e5c-014d-469b-8323-0f131fb83247", 00:26:27.530 "is_configured": false, 00:26:27.530 "data_offset": 0, 00:26:27.530 "data_size": 63488 00:26:27.530 }, 00:26:27.530 { 00:26:27.530 "name": null, 00:26:27.530 "uuid": "4627cb0e-9232-4c94-8d0b-02a3d9343824", 00:26:27.530 "is_configured": false, 00:26:27.530 "data_offset": 0, 00:26:27.530 "data_size": 63488 00:26:27.530 }, 00:26:27.530 { 00:26:27.530 "name": "BaseBdev4", 00:26:27.530 "uuid": "ede8ebc4-ab47-451f-b380-16d8cdc560fb", 00:26:27.530 "is_configured": true, 00:26:27.530 "data_offset": 2048, 00:26:27.530 "data_size": 63488 00:26:27.530 } 00:26:27.530 ] 00:26:27.530 }' 00:26:27.530 23:08:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:27.530 23:08:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:27.794 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:27.794 23:08:03 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:27.794 23:08:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.794 23:08:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:27.794 23:08:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.794 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:26:27.794 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:27.794 23:08:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.794 23:08:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:27.794 [2024-12-09 23:08:03.109876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:27.794 23:08:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.794 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:27.794 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:27.794 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:27.794 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:27.794 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:27.794 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:27.794 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:26:27.794 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:27.794 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:27.794 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:27.794 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:27.794 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:27.794 23:08:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.794 23:08:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:27.794 23:08:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.058 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:28.058 "name": "Existed_Raid", 00:26:28.058 "uuid": "ec8575d9-4df7-4555-8b84-6cb6c458fb20", 00:26:28.058 "strip_size_kb": 64, 00:26:28.058 "state": "configuring", 00:26:28.058 "raid_level": "raid5f", 00:26:28.058 "superblock": true, 00:26:28.058 "num_base_bdevs": 4, 00:26:28.058 "num_base_bdevs_discovered": 3, 00:26:28.058 "num_base_bdevs_operational": 4, 00:26:28.058 "base_bdevs_list": [ 00:26:28.058 { 00:26:28.058 "name": "BaseBdev1", 00:26:28.058 "uuid": "6325f51d-dd2e-4609-a2be-1c72175722d6", 00:26:28.058 "is_configured": true, 00:26:28.058 "data_offset": 2048, 00:26:28.058 "data_size": 63488 00:26:28.058 }, 00:26:28.058 { 00:26:28.058 "name": null, 00:26:28.058 "uuid": "24da1e5c-014d-469b-8323-0f131fb83247", 00:26:28.058 "is_configured": false, 00:26:28.058 "data_offset": 0, 00:26:28.058 "data_size": 63488 00:26:28.058 }, 00:26:28.058 { 00:26:28.058 "name": "BaseBdev3", 00:26:28.058 "uuid": "4627cb0e-9232-4c94-8d0b-02a3d9343824", 
00:26:28.058 "is_configured": true, 00:26:28.058 "data_offset": 2048, 00:26:28.058 "data_size": 63488 00:26:28.058 }, 00:26:28.058 { 00:26:28.058 "name": "BaseBdev4", 00:26:28.058 "uuid": "ede8ebc4-ab47-451f-b380-16d8cdc560fb", 00:26:28.058 "is_configured": true, 00:26:28.058 "data_offset": 2048, 00:26:28.058 "data_size": 63488 00:26:28.058 } 00:26:28.058 ] 00:26:28.058 }' 00:26:28.058 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:28.058 23:08:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:28.319 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:28.319 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:28.319 23:08:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.319 23:08:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:28.319 23:08:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.319 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:26:28.319 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:28.319 23:08:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.319 23:08:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:28.319 [2024-12-09 23:08:03.509999] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:28.319 23:08:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.319 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:26:28.319 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:28.319 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:28.319 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:28.319 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:28.319 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:28.319 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:28.319 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:28.319 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:28.319 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:28.319 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:28.319 23:08:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.319 23:08:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:28.319 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:28.319 23:08:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.319 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:28.319 "name": "Existed_Raid", 00:26:28.319 "uuid": "ec8575d9-4df7-4555-8b84-6cb6c458fb20", 00:26:28.319 "strip_size_kb": 64, 00:26:28.319 "state": "configuring", 00:26:28.319 "raid_level": "raid5f", 
00:26:28.319 "superblock": true, 00:26:28.319 "num_base_bdevs": 4, 00:26:28.319 "num_base_bdevs_discovered": 2, 00:26:28.319 "num_base_bdevs_operational": 4, 00:26:28.319 "base_bdevs_list": [ 00:26:28.319 { 00:26:28.319 "name": null, 00:26:28.319 "uuid": "6325f51d-dd2e-4609-a2be-1c72175722d6", 00:26:28.319 "is_configured": false, 00:26:28.319 "data_offset": 0, 00:26:28.319 "data_size": 63488 00:26:28.319 }, 00:26:28.319 { 00:26:28.319 "name": null, 00:26:28.319 "uuid": "24da1e5c-014d-469b-8323-0f131fb83247", 00:26:28.319 "is_configured": false, 00:26:28.319 "data_offset": 0, 00:26:28.319 "data_size": 63488 00:26:28.319 }, 00:26:28.319 { 00:26:28.319 "name": "BaseBdev3", 00:26:28.319 "uuid": "4627cb0e-9232-4c94-8d0b-02a3d9343824", 00:26:28.319 "is_configured": true, 00:26:28.319 "data_offset": 2048, 00:26:28.319 "data_size": 63488 00:26:28.319 }, 00:26:28.319 { 00:26:28.319 "name": "BaseBdev4", 00:26:28.319 "uuid": "ede8ebc4-ab47-451f-b380-16d8cdc560fb", 00:26:28.319 "is_configured": true, 00:26:28.319 "data_offset": 2048, 00:26:28.319 "data_size": 63488 00:26:28.319 } 00:26:28.319 ] 00:26:28.319 }' 00:26:28.319 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:28.319 23:08:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:28.581 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:28.581 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:28.581 23:08:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.581 23:08:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:28.581 23:08:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.842 23:08:03 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:26:28.842 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:28.842 23:08:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.842 23:08:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:28.842 [2024-12-09 23:08:03.953574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:28.842 23:08:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.842 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:28.842 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:28.842 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:28.842 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:28.842 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:28.842 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:28.842 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:28.842 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:28.842 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:28.842 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:28.842 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:26:28.842 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:28.842 23:08:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.842 23:08:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:28.842 23:08:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.842 23:08:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:28.842 "name": "Existed_Raid", 00:26:28.842 "uuid": "ec8575d9-4df7-4555-8b84-6cb6c458fb20", 00:26:28.842 "strip_size_kb": 64, 00:26:28.842 "state": "configuring", 00:26:28.842 "raid_level": "raid5f", 00:26:28.842 "superblock": true, 00:26:28.842 "num_base_bdevs": 4, 00:26:28.842 "num_base_bdevs_discovered": 3, 00:26:28.842 "num_base_bdevs_operational": 4, 00:26:28.842 "base_bdevs_list": [ 00:26:28.842 { 00:26:28.842 "name": null, 00:26:28.842 "uuid": "6325f51d-dd2e-4609-a2be-1c72175722d6", 00:26:28.842 "is_configured": false, 00:26:28.842 "data_offset": 0, 00:26:28.842 "data_size": 63488 00:26:28.842 }, 00:26:28.842 { 00:26:28.842 "name": "BaseBdev2", 00:26:28.842 "uuid": "24da1e5c-014d-469b-8323-0f131fb83247", 00:26:28.842 "is_configured": true, 00:26:28.842 "data_offset": 2048, 00:26:28.842 "data_size": 63488 00:26:28.842 }, 00:26:28.842 { 00:26:28.842 "name": "BaseBdev3", 00:26:28.842 "uuid": "4627cb0e-9232-4c94-8d0b-02a3d9343824", 00:26:28.842 "is_configured": true, 00:26:28.842 "data_offset": 2048, 00:26:28.842 "data_size": 63488 00:26:28.842 }, 00:26:28.842 { 00:26:28.842 "name": "BaseBdev4", 00:26:28.842 "uuid": "ede8ebc4-ab47-451f-b380-16d8cdc560fb", 00:26:28.843 "is_configured": true, 00:26:28.843 "data_offset": 2048, 00:26:28.843 "data_size": 63488 00:26:28.843 } 00:26:28.843 ] 00:26:28.843 }' 00:26:28.843 23:08:03 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:28.843 23:08:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6325f51d-dd2e-4609-a2be-1c72175722d6 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.104 [2024-12-09 23:08:04.412247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:29.104 [2024-12-09 
23:08:04.412628] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:26:29.104 [2024-12-09 23:08:04.412705] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:29.104 NewBaseBdev 00:26:29.104 [2024-12-09 23:08:04.412937] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.104 [2024-12-09 23:08:04.416806] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:26:29.104 [2024-12-09 23:08:04.416826] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:26:29.104 [2024-12-09 23:08:04.417028] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:29.104 23:08:04 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.104 [ 00:26:29.104 { 00:26:29.104 "name": "NewBaseBdev", 00:26:29.104 "aliases": [ 00:26:29.104 "6325f51d-dd2e-4609-a2be-1c72175722d6" 00:26:29.104 ], 00:26:29.104 "product_name": "Malloc disk", 00:26:29.104 "block_size": 512, 00:26:29.104 "num_blocks": 65536, 00:26:29.104 "uuid": "6325f51d-dd2e-4609-a2be-1c72175722d6", 00:26:29.104 "assigned_rate_limits": { 00:26:29.104 "rw_ios_per_sec": 0, 00:26:29.104 "rw_mbytes_per_sec": 0, 00:26:29.104 "r_mbytes_per_sec": 0, 00:26:29.104 "w_mbytes_per_sec": 0 00:26:29.104 }, 00:26:29.104 "claimed": true, 00:26:29.104 "claim_type": "exclusive_write", 00:26:29.104 "zoned": false, 00:26:29.104 "supported_io_types": { 00:26:29.104 "read": true, 00:26:29.104 "write": true, 00:26:29.104 "unmap": true, 00:26:29.104 "flush": true, 00:26:29.104 "reset": true, 00:26:29.104 "nvme_admin": false, 00:26:29.104 "nvme_io": false, 00:26:29.104 "nvme_io_md": false, 00:26:29.104 "write_zeroes": true, 00:26:29.104 "zcopy": true, 00:26:29.104 "get_zone_info": false, 00:26:29.104 "zone_management": false, 00:26:29.104 "zone_append": false, 00:26:29.104 "compare": false, 00:26:29.104 "compare_and_write": false, 00:26:29.104 "abort": true, 00:26:29.104 "seek_hole": false, 00:26:29.104 "seek_data": false, 00:26:29.104 "copy": true, 00:26:29.104 "nvme_iov_md": false 00:26:29.104 }, 00:26:29.104 "memory_domains": [ 00:26:29.104 { 00:26:29.104 "dma_device_id": "system", 00:26:29.104 "dma_device_type": 1 00:26:29.104 }, 00:26:29.104 { 00:26:29.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:26:29.104 "dma_device_type": 2 00:26:29.104 } 00:26:29.104 ], 00:26:29.104 "driver_specific": {} 00:26:29.104 } 00:26:29.104 ] 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:29.104 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:29.105 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.105 23:08:04 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:26:29.105 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.366 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:29.366 "name": "Existed_Raid", 00:26:29.366 "uuid": "ec8575d9-4df7-4555-8b84-6cb6c458fb20", 00:26:29.366 "strip_size_kb": 64, 00:26:29.366 "state": "online", 00:26:29.366 "raid_level": "raid5f", 00:26:29.366 "superblock": true, 00:26:29.366 "num_base_bdevs": 4, 00:26:29.366 "num_base_bdevs_discovered": 4, 00:26:29.366 "num_base_bdevs_operational": 4, 00:26:29.366 "base_bdevs_list": [ 00:26:29.366 { 00:26:29.366 "name": "NewBaseBdev", 00:26:29.366 "uuid": "6325f51d-dd2e-4609-a2be-1c72175722d6", 00:26:29.366 "is_configured": true, 00:26:29.366 "data_offset": 2048, 00:26:29.366 "data_size": 63488 00:26:29.366 }, 00:26:29.366 { 00:26:29.366 "name": "BaseBdev2", 00:26:29.366 "uuid": "24da1e5c-014d-469b-8323-0f131fb83247", 00:26:29.366 "is_configured": true, 00:26:29.366 "data_offset": 2048, 00:26:29.366 "data_size": 63488 00:26:29.366 }, 00:26:29.366 { 00:26:29.366 "name": "BaseBdev3", 00:26:29.366 "uuid": "4627cb0e-9232-4c94-8d0b-02a3d9343824", 00:26:29.366 "is_configured": true, 00:26:29.366 "data_offset": 2048, 00:26:29.366 "data_size": 63488 00:26:29.366 }, 00:26:29.366 { 00:26:29.366 "name": "BaseBdev4", 00:26:29.366 "uuid": "ede8ebc4-ab47-451f-b380-16d8cdc560fb", 00:26:29.366 "is_configured": true, 00:26:29.366 "data_offset": 2048, 00:26:29.366 "data_size": 63488 00:26:29.366 } 00:26:29.366 ] 00:26:29.366 }' 00:26:29.366 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:29.366 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.626 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:26:29.626 23:08:04 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:29.626 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:29.626 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:29.626 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:26:29.626 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:29.626 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:29.626 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.626 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.626 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:29.626 [2024-12-09 23:08:04.749605] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:29.626 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.626 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:29.626 "name": "Existed_Raid", 00:26:29.626 "aliases": [ 00:26:29.626 "ec8575d9-4df7-4555-8b84-6cb6c458fb20" 00:26:29.626 ], 00:26:29.626 "product_name": "Raid Volume", 00:26:29.626 "block_size": 512, 00:26:29.626 "num_blocks": 190464, 00:26:29.626 "uuid": "ec8575d9-4df7-4555-8b84-6cb6c458fb20", 00:26:29.626 "assigned_rate_limits": { 00:26:29.626 "rw_ios_per_sec": 0, 00:26:29.626 "rw_mbytes_per_sec": 0, 00:26:29.626 "r_mbytes_per_sec": 0, 00:26:29.626 "w_mbytes_per_sec": 0 00:26:29.626 }, 00:26:29.626 "claimed": false, 00:26:29.626 "zoned": false, 00:26:29.626 "supported_io_types": { 00:26:29.626 "read": true, 00:26:29.626 
"write": true, 00:26:29.626 "unmap": false, 00:26:29.626 "flush": false, 00:26:29.626 "reset": true, 00:26:29.626 "nvme_admin": false, 00:26:29.626 "nvme_io": false, 00:26:29.626 "nvme_io_md": false, 00:26:29.626 "write_zeroes": true, 00:26:29.626 "zcopy": false, 00:26:29.626 "get_zone_info": false, 00:26:29.626 "zone_management": false, 00:26:29.626 "zone_append": false, 00:26:29.626 "compare": false, 00:26:29.626 "compare_and_write": false, 00:26:29.626 "abort": false, 00:26:29.626 "seek_hole": false, 00:26:29.626 "seek_data": false, 00:26:29.626 "copy": false, 00:26:29.626 "nvme_iov_md": false 00:26:29.626 }, 00:26:29.626 "driver_specific": { 00:26:29.626 "raid": { 00:26:29.626 "uuid": "ec8575d9-4df7-4555-8b84-6cb6c458fb20", 00:26:29.626 "strip_size_kb": 64, 00:26:29.626 "state": "online", 00:26:29.626 "raid_level": "raid5f", 00:26:29.626 "superblock": true, 00:26:29.626 "num_base_bdevs": 4, 00:26:29.626 "num_base_bdevs_discovered": 4, 00:26:29.626 "num_base_bdevs_operational": 4, 00:26:29.626 "base_bdevs_list": [ 00:26:29.626 { 00:26:29.626 "name": "NewBaseBdev", 00:26:29.626 "uuid": "6325f51d-dd2e-4609-a2be-1c72175722d6", 00:26:29.626 "is_configured": true, 00:26:29.626 "data_offset": 2048, 00:26:29.626 "data_size": 63488 00:26:29.626 }, 00:26:29.626 { 00:26:29.626 "name": "BaseBdev2", 00:26:29.626 "uuid": "24da1e5c-014d-469b-8323-0f131fb83247", 00:26:29.626 "is_configured": true, 00:26:29.626 "data_offset": 2048, 00:26:29.626 "data_size": 63488 00:26:29.626 }, 00:26:29.626 { 00:26:29.626 "name": "BaseBdev3", 00:26:29.626 "uuid": "4627cb0e-9232-4c94-8d0b-02a3d9343824", 00:26:29.626 "is_configured": true, 00:26:29.626 "data_offset": 2048, 00:26:29.626 "data_size": 63488 00:26:29.626 }, 00:26:29.626 { 00:26:29.626 "name": "BaseBdev4", 00:26:29.626 "uuid": "ede8ebc4-ab47-451f-b380-16d8cdc560fb", 00:26:29.626 "is_configured": true, 00:26:29.626 "data_offset": 2048, 00:26:29.626 "data_size": 63488 00:26:29.626 } 00:26:29.626 ] 00:26:29.626 } 00:26:29.626 } 
00:26:29.626 }' 00:26:29.626 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:29.626 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:26:29.626 BaseBdev2 00:26:29.626 BaseBdev3 00:26:29.626 BaseBdev4' 00:26:29.626 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:29.626 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.627 [2024-12-09 23:08:04.977453] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:29.627 [2024-12-09 23:08:04.977485] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:29.627 [2024-12-09 23:08:04.977549] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:29.627 [2024-12-09 23:08:04.977796] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:29.627 [2024-12-09 23:08:04.977811] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81105 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81105 ']' 00:26:29.627 23:08:04 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 81105 00:26:29.627 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:26:29.885 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:29.885 23:08:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81105 00:26:29.885 killing process with pid 81105 00:26:29.885 23:08:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:29.885 23:08:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:29.885 23:08:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81105' 00:26:29.885 23:08:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 81105 00:26:29.885 [2024-12-09 23:08:05.008955] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:29.885 23:08:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 81105 00:26:29.886 [2024-12-09 23:08:05.209027] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:30.456 23:08:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:26:30.456 00:26:30.456 real 0m8.349s 00:26:30.456 user 0m13.520s 00:26:30.456 sys 0m1.407s 00:26:30.456 ************************************ 00:26:30.456 END TEST raid5f_state_function_test_sb 00:26:30.456 ************************************ 00:26:30.456 23:08:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:30.456 23:08:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:30.716 23:08:05 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test 
raid5f 4 00:26:30.716 23:08:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:30.716 23:08:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:30.716 23:08:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:30.716 ************************************ 00:26:30.716 START TEST raid5f_superblock_test 00:26:30.716 ************************************ 00:26:30.716 23:08:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:26:30.716 23:08:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:26:30.716 23:08:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:26:30.716 23:08:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:26:30.717 23:08:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:26:30.717 23:08:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:26:30.717 23:08:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:26:30.717 23:08:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:26:30.717 23:08:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:26:30.717 23:08:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:26:30.717 23:08:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:26:30.717 23:08:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:26:30.717 23:08:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:26:30.717 23:08:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:26:30.717 23:08:05 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:26:30.717 23:08:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:26:30.717 23:08:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:26:30.717 23:08:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81737 00:26:30.717 23:08:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81737 00:26:30.717 23:08:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81737 ']' 00:26:30.717 23:08:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:30.717 23:08:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:26:30.717 23:08:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:30.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:30.717 23:08:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:30.717 23:08:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:30.717 23:08:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.717 [2024-12-09 23:08:05.907173] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:26:30.717 [2024-12-09 23:08:05.907310] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81737 ] 00:26:30.717 [2024-12-09 23:08:06.064233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.986 [2024-12-09 23:08:06.151537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.986 [2024-12-09 23:08:06.264170] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:30.986 [2024-12-09 23:08:06.264220] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:31.559 malloc1 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:31.559 [2024-12-09 23:08:06.788974] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:31.559 [2024-12-09 23:08:06.789041] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:31.559 [2024-12-09 23:08:06.789062] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:31.559 [2024-12-09 23:08:06.789070] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:31.559 [2024-12-09 23:08:06.790919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:31.559 [2024-12-09 23:08:06.790954] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:31.559 pt1 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:31.559 malloc2 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:31.559 [2024-12-09 23:08:06.824980] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:31.559 [2024-12-09 23:08:06.825037] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:31.559 [2024-12-09 23:08:06.825061] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:26:31.559 [2024-12-09 23:08:06.825068] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:31.559 [2024-12-09 23:08:06.826898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:31.559 [2024-12-09 23:08:06.826931] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:31.559 pt2 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:31.559 malloc3 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:31.559 [2024-12-09 23:08:06.870129] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:31.559 [2024-12-09 23:08:06.870189] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:31.559 [2024-12-09 23:08:06.870210] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:26:31.559 [2024-12-09 23:08:06.870219] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:31.559 [2024-12-09 23:08:06.872042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:31.559 [2024-12-09 23:08:06.872081] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:31.559 pt3 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.559 23:08:06 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:31.559 malloc4 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:31.559 [2024-12-09 23:08:06.902171] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:31.559 [2024-12-09 23:08:06.902227] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:31.559 [2024-12-09 23:08:06.902242] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:26:31.559 [2024-12-09 23:08:06.902250] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:31.559 [2024-12-09 23:08:06.904036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:31.559 [2024-12-09 23:08:06.904070] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:31.559 pt4 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:26:31.559 [2024-12-09 23:08:06.910223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:31.559 [2024-12-09 23:08:06.911792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:31.559 [2024-12-09 23:08:06.911867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:31.559 [2024-12-09 23:08:06.911905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:31.559 [2024-12-09 23:08:06.912069] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:26:31.559 [2024-12-09 23:08:06.912081] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:31.559 [2024-12-09 23:08:06.912318] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:26:31.559 [2024-12-09 23:08:06.916337] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:26:31.559 [2024-12-09 23:08:06.916363] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:26:31.559 [2024-12-09 23:08:06.916572] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:31.559 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:31.560 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:31.560 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:31.560 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:31.560 
23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:31.560 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:31.560 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:31.560 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:31.818 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:31.818 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:31.818 23:08:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.818 23:08:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:31.818 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:31.818 23:08:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.818 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:31.818 "name": "raid_bdev1", 00:26:31.818 "uuid": "47dee0a6-899b-4fea-ac46-7b4d0f455e18", 00:26:31.818 "strip_size_kb": 64, 00:26:31.818 "state": "online", 00:26:31.818 "raid_level": "raid5f", 00:26:31.818 "superblock": true, 00:26:31.818 "num_base_bdevs": 4, 00:26:31.818 "num_base_bdevs_discovered": 4, 00:26:31.818 "num_base_bdevs_operational": 4, 00:26:31.818 "base_bdevs_list": [ 00:26:31.818 { 00:26:31.818 "name": "pt1", 00:26:31.818 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:31.818 "is_configured": true, 00:26:31.818 "data_offset": 2048, 00:26:31.818 "data_size": 63488 00:26:31.818 }, 00:26:31.818 { 00:26:31.818 "name": "pt2", 00:26:31.818 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:31.818 "is_configured": true, 00:26:31.818 "data_offset": 2048, 00:26:31.818 
"data_size": 63488 00:26:31.818 }, 00:26:31.818 { 00:26:31.818 "name": "pt3", 00:26:31.818 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:31.818 "is_configured": true, 00:26:31.818 "data_offset": 2048, 00:26:31.818 "data_size": 63488 00:26:31.818 }, 00:26:31.818 { 00:26:31.818 "name": "pt4", 00:26:31.818 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:31.818 "is_configured": true, 00:26:31.818 "data_offset": 2048, 00:26:31.818 "data_size": 63488 00:26:31.818 } 00:26:31.818 ] 00:26:31.818 }' 00:26:31.818 23:08:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:31.818 23:08:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:32.079 [2024-12-09 23:08:07.249256] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:32.079 "name": "raid_bdev1", 00:26:32.079 "aliases": [ 00:26:32.079 "47dee0a6-899b-4fea-ac46-7b4d0f455e18" 00:26:32.079 ], 00:26:32.079 "product_name": "Raid Volume", 00:26:32.079 "block_size": 512, 00:26:32.079 "num_blocks": 190464, 00:26:32.079 "uuid": "47dee0a6-899b-4fea-ac46-7b4d0f455e18", 00:26:32.079 "assigned_rate_limits": { 00:26:32.079 "rw_ios_per_sec": 0, 00:26:32.079 "rw_mbytes_per_sec": 0, 00:26:32.079 "r_mbytes_per_sec": 0, 00:26:32.079 "w_mbytes_per_sec": 0 00:26:32.079 }, 00:26:32.079 "claimed": false, 00:26:32.079 "zoned": false, 00:26:32.079 "supported_io_types": { 00:26:32.079 "read": true, 00:26:32.079 "write": true, 00:26:32.079 "unmap": false, 00:26:32.079 "flush": false, 00:26:32.079 "reset": true, 00:26:32.079 "nvme_admin": false, 00:26:32.079 "nvme_io": false, 00:26:32.079 "nvme_io_md": false, 00:26:32.079 "write_zeroes": true, 00:26:32.079 "zcopy": false, 00:26:32.079 "get_zone_info": false, 00:26:32.079 "zone_management": false, 00:26:32.079 "zone_append": false, 00:26:32.079 "compare": false, 00:26:32.079 "compare_and_write": false, 00:26:32.079 "abort": false, 00:26:32.079 "seek_hole": false, 00:26:32.079 "seek_data": false, 00:26:32.079 "copy": false, 00:26:32.079 "nvme_iov_md": false 00:26:32.079 }, 00:26:32.079 "driver_specific": { 00:26:32.079 "raid": { 00:26:32.079 "uuid": "47dee0a6-899b-4fea-ac46-7b4d0f455e18", 00:26:32.079 "strip_size_kb": 64, 00:26:32.079 "state": "online", 00:26:32.079 "raid_level": "raid5f", 00:26:32.079 "superblock": true, 00:26:32.079 "num_base_bdevs": 4, 00:26:32.079 "num_base_bdevs_discovered": 4, 00:26:32.079 "num_base_bdevs_operational": 4, 00:26:32.079 "base_bdevs_list": [ 00:26:32.079 { 00:26:32.079 "name": "pt1", 00:26:32.079 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:32.079 "is_configured": true, 00:26:32.079 "data_offset": 2048, 
00:26:32.079 "data_size": 63488 00:26:32.079 }, 00:26:32.079 { 00:26:32.079 "name": "pt2", 00:26:32.079 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:32.079 "is_configured": true, 00:26:32.079 "data_offset": 2048, 00:26:32.079 "data_size": 63488 00:26:32.079 }, 00:26:32.079 { 00:26:32.079 "name": "pt3", 00:26:32.079 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:32.079 "is_configured": true, 00:26:32.079 "data_offset": 2048, 00:26:32.079 "data_size": 63488 00:26:32.079 }, 00:26:32.079 { 00:26:32.079 "name": "pt4", 00:26:32.079 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:32.079 "is_configured": true, 00:26:32.079 "data_offset": 2048, 00:26:32.079 "data_size": 63488 00:26:32.079 } 00:26:32.079 ] 00:26:32.079 } 00:26:32.079 } 00:26:32.079 }' 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:26:32.079 pt2 00:26:32.079 pt3 00:26:32.079 pt4' 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.079 23:08:07 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:32.079 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.341 [2024-12-09 23:08:07.485284] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=47dee0a6-899b-4fea-ac46-7b4d0f455e18 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
47dee0a6-899b-4fea-ac46-7b4d0f455e18 ']' 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.341 [2024-12-09 23:08:07.509140] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:32.341 [2024-12-09 23:08:07.509181] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:32.341 [2024-12-09 23:08:07.509251] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:32.341 [2024-12-09 23:08:07.509329] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:32.341 [2024-12-09 23:08:07.509342] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:32.341 
23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.341 23:08:07 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:26:32.341 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.341 [2024-12-09 23:08:07.629197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:32.341 [2024-12-09 23:08:07.630784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:32.341 [2024-12-09 23:08:07.630831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:26:32.341 [2024-12-09 23:08:07.630860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:26:32.341 [2024-12-09 23:08:07.630901] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:26:32.341 [2024-12-09 23:08:07.630944] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:26:32.341 [2024-12-09 23:08:07.630961] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:26:32.341 [2024-12-09 23:08:07.630978] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:26:32.341 [2024-12-09 23:08:07.630989] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:32.341 [2024-12-09 23:08:07.630999] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:26:32.341 request: 00:26:32.341 { 00:26:32.341 "name": "raid_bdev1", 00:26:32.341 "raid_level": "raid5f", 00:26:32.341 "base_bdevs": [ 00:26:32.341 "malloc1", 00:26:32.342 "malloc2", 00:26:32.342 "malloc3", 00:26:32.342 "malloc4" 00:26:32.342 ], 00:26:32.342 "strip_size_kb": 64, 00:26:32.342 "superblock": false, 00:26:32.342 "method": "bdev_raid_create", 00:26:32.342 "req_id": 1 00:26:32.342 } 00:26:32.342 Got JSON-RPC error response 
00:26:32.342 response: 00:26:32.342 { 00:26:32.342 "code": -17, 00:26:32.342 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:32.342 } 00:26:32.342 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:32.342 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:26:32.342 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:32.342 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:32.342 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:32.342 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:32.342 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.342 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.342 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:26:32.342 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.342 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:26:32.342 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:26:32.342 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:32.342 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.342 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.342 [2024-12-09 23:08:07.673171] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:32.342 [2024-12-09 23:08:07.673232] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:26:32.342 [2024-12-09 23:08:07.673246] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:26:32.342 [2024-12-09 23:08:07.673255] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:32.342 [2024-12-09 23:08:07.675123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:32.342 [2024-12-09 23:08:07.675163] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:32.342 [2024-12-09 23:08:07.675236] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:32.342 [2024-12-09 23:08:07.675282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:32.342 pt1 00:26:32.342 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.342 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:26:32.342 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:32.342 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:32.342 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:32.342 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:32.342 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:32.342 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:32.342 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:32.342 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:32.342 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:26:32.342 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:32.342 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:32.342 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.342 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.342 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.602 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:32.602 "name": "raid_bdev1", 00:26:32.602 "uuid": "47dee0a6-899b-4fea-ac46-7b4d0f455e18", 00:26:32.602 "strip_size_kb": 64, 00:26:32.602 "state": "configuring", 00:26:32.602 "raid_level": "raid5f", 00:26:32.602 "superblock": true, 00:26:32.602 "num_base_bdevs": 4, 00:26:32.602 "num_base_bdevs_discovered": 1, 00:26:32.602 "num_base_bdevs_operational": 4, 00:26:32.602 "base_bdevs_list": [ 00:26:32.602 { 00:26:32.602 "name": "pt1", 00:26:32.602 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:32.602 "is_configured": true, 00:26:32.602 "data_offset": 2048, 00:26:32.602 "data_size": 63488 00:26:32.602 }, 00:26:32.602 { 00:26:32.602 "name": null, 00:26:32.602 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:32.602 "is_configured": false, 00:26:32.602 "data_offset": 2048, 00:26:32.602 "data_size": 63488 00:26:32.602 }, 00:26:32.602 { 00:26:32.602 "name": null, 00:26:32.602 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:32.602 "is_configured": false, 00:26:32.602 "data_offset": 2048, 00:26:32.602 "data_size": 63488 00:26:32.602 }, 00:26:32.602 { 00:26:32.602 "name": null, 00:26:32.602 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:32.602 "is_configured": false, 00:26:32.602 "data_offset": 2048, 00:26:32.602 "data_size": 63488 00:26:32.602 } 00:26:32.602 ] 00:26:32.602 }' 
00:26:32.602 23:08:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:32.602 23:08:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.862 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:26:32.862 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:32.862 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.862 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.862 [2024-12-09 23:08:08.017256] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:32.862 [2024-12-09 23:08:08.017328] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:32.862 [2024-12-09 23:08:08.017344] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:26:32.862 [2024-12-09 23:08:08.017354] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:32.862 [2024-12-09 23:08:08.017700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:32.862 [2024-12-09 23:08:08.017713] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:32.862 [2024-12-09 23:08:08.017772] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:32.862 [2024-12-09 23:08:08.017791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:32.862 pt2 00:26:32.862 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.862 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:26:32.862 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:32.862 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.862 [2024-12-09 23:08:08.025270] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:26:32.862 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.862 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:26:32.862 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:32.862 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:32.862 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:32.862 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:32.862 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:32.862 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:32.862 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:32.862 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:32.862 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:32.862 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:32.862 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.862 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.862 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:32.862 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:26:32.862 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:32.862 "name": "raid_bdev1", 00:26:32.862 "uuid": "47dee0a6-899b-4fea-ac46-7b4d0f455e18", 00:26:32.862 "strip_size_kb": 64, 00:26:32.862 "state": "configuring", 00:26:32.862 "raid_level": "raid5f", 00:26:32.862 "superblock": true, 00:26:32.862 "num_base_bdevs": 4, 00:26:32.862 "num_base_bdevs_discovered": 1, 00:26:32.862 "num_base_bdevs_operational": 4, 00:26:32.862 "base_bdevs_list": [ 00:26:32.862 { 00:26:32.862 "name": "pt1", 00:26:32.862 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:32.862 "is_configured": true, 00:26:32.862 "data_offset": 2048, 00:26:32.862 "data_size": 63488 00:26:32.862 }, 00:26:32.862 { 00:26:32.862 "name": null, 00:26:32.862 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:32.862 "is_configured": false, 00:26:32.862 "data_offset": 0, 00:26:32.862 "data_size": 63488 00:26:32.862 }, 00:26:32.862 { 00:26:32.862 "name": null, 00:26:32.862 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:32.862 "is_configured": false, 00:26:32.862 "data_offset": 2048, 00:26:32.862 "data_size": 63488 00:26:32.862 }, 00:26:32.862 { 00:26:32.862 "name": null, 00:26:32.862 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:32.862 "is_configured": false, 00:26:32.862 "data_offset": 2048, 00:26:32.862 "data_size": 63488 00:26:32.862 } 00:26:32.862 ] 00:26:32.862 }' 00:26:32.862 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:32.862 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:33.124 [2024-12-09 23:08:08.365324] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:33.124 [2024-12-09 23:08:08.365385] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:33.124 [2024-12-09 23:08:08.365402] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:26:33.124 [2024-12-09 23:08:08.365410] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:33.124 [2024-12-09 23:08:08.365759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:33.124 [2024-12-09 23:08:08.365770] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:33.124 [2024-12-09 23:08:08.365834] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:33.124 [2024-12-09 23:08:08.365850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:33.124 pt2 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:33.124 [2024-12-09 23:08:08.373324] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:26:33.124 [2024-12-09 23:08:08.373374] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:33.124 [2024-12-09 23:08:08.373392] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:26:33.124 [2024-12-09 23:08:08.373399] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:33.124 [2024-12-09 23:08:08.373742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:33.124 [2024-12-09 23:08:08.373758] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:33.124 [2024-12-09 23:08:08.373821] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:33.124 [2024-12-09 23:08:08.373840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:33.124 pt3 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:33.124 [2024-12-09 23:08:08.381305] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:33.124 [2024-12-09 23:08:08.381349] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:33.124 [2024-12-09 23:08:08.381364] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:26:33.124 [2024-12-09 23:08:08.381371] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:33.124 [2024-12-09 23:08:08.381722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:33.124 [2024-12-09 23:08:08.381737] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:33.124 [2024-12-09 23:08:08.381797] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:33.124 [2024-12-09 23:08:08.381815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:33.124 [2024-12-09 23:08:08.381931] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:33.124 [2024-12-09 23:08:08.381942] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:33.124 [2024-12-09 23:08:08.382160] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:26:33.124 [2024-12-09 23:08:08.386009] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:33.124 [2024-12-09 23:08:08.386030] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:26:33.124 [2024-12-09 23:08:08.386203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:33.124 pt4 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:33.124 "name": "raid_bdev1", 00:26:33.124 "uuid": "47dee0a6-899b-4fea-ac46-7b4d0f455e18", 00:26:33.124 "strip_size_kb": 64, 00:26:33.124 "state": "online", 00:26:33.124 "raid_level": "raid5f", 00:26:33.124 "superblock": true, 00:26:33.124 "num_base_bdevs": 4, 00:26:33.124 "num_base_bdevs_discovered": 4, 00:26:33.124 "num_base_bdevs_operational": 4, 00:26:33.124 "base_bdevs_list": [ 00:26:33.124 { 00:26:33.124 "name": "pt1", 00:26:33.124 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:33.124 "is_configured": true, 00:26:33.124 
"data_offset": 2048, 00:26:33.124 "data_size": 63488 00:26:33.124 }, 00:26:33.124 { 00:26:33.124 "name": "pt2", 00:26:33.124 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:33.124 "is_configured": true, 00:26:33.124 "data_offset": 2048, 00:26:33.124 "data_size": 63488 00:26:33.124 }, 00:26:33.124 { 00:26:33.124 "name": "pt3", 00:26:33.124 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:33.124 "is_configured": true, 00:26:33.124 "data_offset": 2048, 00:26:33.124 "data_size": 63488 00:26:33.124 }, 00:26:33.124 { 00:26:33.124 "name": "pt4", 00:26:33.124 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:33.124 "is_configured": true, 00:26:33.124 "data_offset": 2048, 00:26:33.124 "data_size": 63488 00:26:33.124 } 00:26:33.124 ] 00:26:33.124 }' 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:33.124 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:33.386 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:26:33.386 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:26:33.386 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:33.386 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:33.386 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:33.386 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:33.386 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:33.386 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:33.386 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.386 23:08:08 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:33.386 [2024-12-09 23:08:08.706859] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:33.386 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.386 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:33.386 "name": "raid_bdev1", 00:26:33.386 "aliases": [ 00:26:33.386 "47dee0a6-899b-4fea-ac46-7b4d0f455e18" 00:26:33.386 ], 00:26:33.386 "product_name": "Raid Volume", 00:26:33.386 "block_size": 512, 00:26:33.386 "num_blocks": 190464, 00:26:33.386 "uuid": "47dee0a6-899b-4fea-ac46-7b4d0f455e18", 00:26:33.386 "assigned_rate_limits": { 00:26:33.386 "rw_ios_per_sec": 0, 00:26:33.386 "rw_mbytes_per_sec": 0, 00:26:33.386 "r_mbytes_per_sec": 0, 00:26:33.386 "w_mbytes_per_sec": 0 00:26:33.386 }, 00:26:33.386 "claimed": false, 00:26:33.386 "zoned": false, 00:26:33.386 "supported_io_types": { 00:26:33.386 "read": true, 00:26:33.386 "write": true, 00:26:33.386 "unmap": false, 00:26:33.386 "flush": false, 00:26:33.386 "reset": true, 00:26:33.386 "nvme_admin": false, 00:26:33.386 "nvme_io": false, 00:26:33.386 "nvme_io_md": false, 00:26:33.386 "write_zeroes": true, 00:26:33.386 "zcopy": false, 00:26:33.386 "get_zone_info": false, 00:26:33.386 "zone_management": false, 00:26:33.386 "zone_append": false, 00:26:33.386 "compare": false, 00:26:33.386 "compare_and_write": false, 00:26:33.386 "abort": false, 00:26:33.386 "seek_hole": false, 00:26:33.386 "seek_data": false, 00:26:33.386 "copy": false, 00:26:33.386 "nvme_iov_md": false 00:26:33.386 }, 00:26:33.386 "driver_specific": { 00:26:33.386 "raid": { 00:26:33.386 "uuid": "47dee0a6-899b-4fea-ac46-7b4d0f455e18", 00:26:33.386 "strip_size_kb": 64, 00:26:33.386 "state": "online", 00:26:33.386 "raid_level": "raid5f", 00:26:33.386 "superblock": true, 00:26:33.386 "num_base_bdevs": 4, 00:26:33.386 "num_base_bdevs_discovered": 4, 
00:26:33.386 "num_base_bdevs_operational": 4, 00:26:33.386 "base_bdevs_list": [ 00:26:33.386 { 00:26:33.386 "name": "pt1", 00:26:33.386 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:33.386 "is_configured": true, 00:26:33.386 "data_offset": 2048, 00:26:33.386 "data_size": 63488 00:26:33.386 }, 00:26:33.386 { 00:26:33.386 "name": "pt2", 00:26:33.386 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:33.386 "is_configured": true, 00:26:33.386 "data_offset": 2048, 00:26:33.386 "data_size": 63488 00:26:33.386 }, 00:26:33.386 { 00:26:33.386 "name": "pt3", 00:26:33.386 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:33.386 "is_configured": true, 00:26:33.386 "data_offset": 2048, 00:26:33.386 "data_size": 63488 00:26:33.386 }, 00:26:33.386 { 00:26:33.386 "name": "pt4", 00:26:33.386 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:33.386 "is_configured": true, 00:26:33.386 "data_offset": 2048, 00:26:33.386 "data_size": 63488 00:26:33.386 } 00:26:33.386 ] 00:26:33.386 } 00:26:33.386 } 00:26:33.386 }' 00:26:33.386 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:26:33.683 pt2 00:26:33.683 pt3 00:26:33.683 pt4' 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:33.683 [2024-12-09 23:08:08.946869] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.683 23:08:08 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 47dee0a6-899b-4fea-ac46-7b4d0f455e18 '!=' 47dee0a6-899b-4fea-ac46-7b4d0f455e18 ']' 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:33.683 [2024-12-09 23:08:08.978761] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:33.683 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.684 23:08:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:33.684 23:08:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.684 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:33.684 "name": "raid_bdev1", 00:26:33.684 "uuid": "47dee0a6-899b-4fea-ac46-7b4d0f455e18", 00:26:33.684 "strip_size_kb": 64, 00:26:33.684 "state": "online", 00:26:33.684 "raid_level": "raid5f", 00:26:33.684 "superblock": true, 00:26:33.684 "num_base_bdevs": 4, 00:26:33.684 "num_base_bdevs_discovered": 3, 00:26:33.684 "num_base_bdevs_operational": 3, 00:26:33.684 "base_bdevs_list": [ 00:26:33.684 { 00:26:33.684 "name": null, 00:26:33.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:33.684 "is_configured": false, 00:26:33.684 "data_offset": 0, 00:26:33.684 "data_size": 63488 00:26:33.684 }, 00:26:33.684 { 00:26:33.684 "name": "pt2", 00:26:33.684 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:33.684 "is_configured": true, 00:26:33.684 "data_offset": 2048, 00:26:33.684 "data_size": 63488 00:26:33.684 }, 00:26:33.684 { 00:26:33.684 "name": "pt3", 00:26:33.684 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:33.684 "is_configured": true, 00:26:33.684 "data_offset": 2048, 00:26:33.684 "data_size": 63488 00:26:33.684 }, 00:26:33.684 { 00:26:33.684 "name": "pt4", 00:26:33.684 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:33.684 "is_configured": true, 00:26:33.684 
"data_offset": 2048, 00:26:33.684 "data_size": 63488 00:26:33.684 } 00:26:33.684 ] 00:26:33.684 }' 00:26:33.684 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:33.684 23:08:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:33.945 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:33.945 23:08:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.945 23:08:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:33.945 [2024-12-09 23:08:09.290764] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:33.945 [2024-12-09 23:08:09.290791] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:33.945 [2024-12-09 23:08:09.290847] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:33.945 [2024-12-09 23:08:09.290909] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:33.945 [2024-12-09 23:08:09.290917] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:26:33.945 23:08:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.945 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:33.945 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:26:33.945 23:08:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.945 23:08:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.207 [2024-12-09 23:08:09.358766] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:34.207 [2024-12-09 23:08:09.358808] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:34.207 [2024-12-09 23:08:09.358822] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:26:34.207 [2024-12-09 23:08:09.358830] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:34.207 [2024-12-09 23:08:09.360688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:34.207 [2024-12-09 23:08:09.360717] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:34.207 [2024-12-09 23:08:09.360779] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:34.207 [2024-12-09 23:08:09.360812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:34.207 pt2 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:34.207 "name": "raid_bdev1", 00:26:34.207 "uuid": "47dee0a6-899b-4fea-ac46-7b4d0f455e18", 00:26:34.207 "strip_size_kb": 64, 00:26:34.207 "state": "configuring", 00:26:34.207 "raid_level": "raid5f", 00:26:34.207 "superblock": true, 00:26:34.207 
"num_base_bdevs": 4, 00:26:34.207 "num_base_bdevs_discovered": 1, 00:26:34.207 "num_base_bdevs_operational": 3, 00:26:34.207 "base_bdevs_list": [ 00:26:34.207 { 00:26:34.207 "name": null, 00:26:34.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:34.207 "is_configured": false, 00:26:34.207 "data_offset": 2048, 00:26:34.207 "data_size": 63488 00:26:34.207 }, 00:26:34.207 { 00:26:34.207 "name": "pt2", 00:26:34.207 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:34.207 "is_configured": true, 00:26:34.207 "data_offset": 2048, 00:26:34.207 "data_size": 63488 00:26:34.207 }, 00:26:34.207 { 00:26:34.207 "name": null, 00:26:34.207 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:34.207 "is_configured": false, 00:26:34.207 "data_offset": 2048, 00:26:34.207 "data_size": 63488 00:26:34.207 }, 00:26:34.207 { 00:26:34.207 "name": null, 00:26:34.207 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:34.207 "is_configured": false, 00:26:34.207 "data_offset": 2048, 00:26:34.207 "data_size": 63488 00:26:34.207 } 00:26:34.207 ] 00:26:34.207 }' 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:34.207 23:08:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.471 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:26:34.471 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:26:34.471 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:34.471 23:08:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.471 23:08:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.471 [2024-12-09 23:08:09.670871] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:34.471 [2024-12-09 
23:08:09.671031] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:34.471 [2024-12-09 23:08:09.671053] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:26:34.471 [2024-12-09 23:08:09.671061] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:34.471 [2024-12-09 23:08:09.671401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:34.471 [2024-12-09 23:08:09.671418] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:34.471 [2024-12-09 23:08:09.671482] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:34.471 [2024-12-09 23:08:09.671498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:34.471 pt3 00:26:34.471 23:08:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.471 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:26:34.471 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:34.471 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:34.471 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:34.471 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:34.471 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:34.471 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:34.471 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:34.471 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:26:34.471 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:34.471 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:34.471 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:34.471 23:08:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.471 23:08:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.471 23:08:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.471 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:34.471 "name": "raid_bdev1", 00:26:34.471 "uuid": "47dee0a6-899b-4fea-ac46-7b4d0f455e18", 00:26:34.471 "strip_size_kb": 64, 00:26:34.471 "state": "configuring", 00:26:34.471 "raid_level": "raid5f", 00:26:34.471 "superblock": true, 00:26:34.471 "num_base_bdevs": 4, 00:26:34.471 "num_base_bdevs_discovered": 2, 00:26:34.471 "num_base_bdevs_operational": 3, 00:26:34.471 "base_bdevs_list": [ 00:26:34.471 { 00:26:34.471 "name": null, 00:26:34.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:34.471 "is_configured": false, 00:26:34.471 "data_offset": 2048, 00:26:34.471 "data_size": 63488 00:26:34.471 }, 00:26:34.471 { 00:26:34.471 "name": "pt2", 00:26:34.471 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:34.472 "is_configured": true, 00:26:34.472 "data_offset": 2048, 00:26:34.472 "data_size": 63488 00:26:34.472 }, 00:26:34.472 { 00:26:34.472 "name": "pt3", 00:26:34.472 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:34.472 "is_configured": true, 00:26:34.472 "data_offset": 2048, 00:26:34.472 "data_size": 63488 00:26:34.472 }, 00:26:34.472 { 00:26:34.472 "name": null, 00:26:34.472 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:34.472 "is_configured": false, 00:26:34.472 "data_offset": 2048, 
00:26:34.472 "data_size": 63488 00:26:34.472 } 00:26:34.472 ] 00:26:34.472 }' 00:26:34.472 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:34.472 23:08:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.734 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:26:34.734 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:26:34.734 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:26:34.734 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:34.734 23:08:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.734 23:08:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.734 [2024-12-09 23:08:09.990929] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:34.734 [2024-12-09 23:08:09.990975] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:34.734 [2024-12-09 23:08:09.990990] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:26:34.734 [2024-12-09 23:08:09.990997] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:34.734 [2024-12-09 23:08:09.991343] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:34.734 [2024-12-09 23:08:09.991354] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:34.734 [2024-12-09 23:08:09.991413] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:34.734 [2024-12-09 23:08:09.991433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:34.734 [2024-12-09 23:08:09.991534] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:26:34.734 [2024-12-09 23:08:09.991541] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:34.734 [2024-12-09 23:08:09.991736] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:26:34.734 [2024-12-09 23:08:09.995521] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:26:34.734 [2024-12-09 23:08:09.995542] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:26:34.734 [2024-12-09 23:08:09.995760] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:34.734 pt4 00:26:34.734 23:08:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.734 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:34.734 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:34.734 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:34.734 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:34.734 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:34.734 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:34.734 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:34.734 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:34.734 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:34.734 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:34.734 
23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:34.734 23:08:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.734 23:08:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.734 23:08:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:34.734 23:08:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.734 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:34.734 "name": "raid_bdev1", 00:26:34.734 "uuid": "47dee0a6-899b-4fea-ac46-7b4d0f455e18", 00:26:34.734 "strip_size_kb": 64, 00:26:34.734 "state": "online", 00:26:34.734 "raid_level": "raid5f", 00:26:34.734 "superblock": true, 00:26:34.734 "num_base_bdevs": 4, 00:26:34.734 "num_base_bdevs_discovered": 3, 00:26:34.734 "num_base_bdevs_operational": 3, 00:26:34.734 "base_bdevs_list": [ 00:26:34.734 { 00:26:34.734 "name": null, 00:26:34.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:34.734 "is_configured": false, 00:26:34.734 "data_offset": 2048, 00:26:34.734 "data_size": 63488 00:26:34.734 }, 00:26:34.734 { 00:26:34.734 "name": "pt2", 00:26:34.734 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:34.734 "is_configured": true, 00:26:34.734 "data_offset": 2048, 00:26:34.734 "data_size": 63488 00:26:34.734 }, 00:26:34.734 { 00:26:34.734 "name": "pt3", 00:26:34.734 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:34.734 "is_configured": true, 00:26:34.734 "data_offset": 2048, 00:26:34.734 "data_size": 63488 00:26:34.734 }, 00:26:34.734 { 00:26:34.734 "name": "pt4", 00:26:34.734 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:34.734 "is_configured": true, 00:26:34.734 "data_offset": 2048, 00:26:34.734 "data_size": 63488 00:26:34.734 } 00:26:34.734 ] 00:26:34.734 }' 00:26:34.734 23:08:10 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:34.734 23:08:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.995 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:34.995 23:08:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.995 23:08:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.995 [2024-12-09 23:08:10.292124] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:34.995 [2024-12-09 23:08:10.292146] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:34.995 [2024-12-09 23:08:10.292203] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:34.995 [2024-12-09 23:08:10.292262] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:34.995 [2024-12-09 23:08:10.292272] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:26:34.995 23:08:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.995 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:34.995 23:08:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.995 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:26:34.995 23:08:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.995 23:08:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.995 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:26:34.995 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:26:34.995 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:26:34.995 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:26:34.995 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:26:34.995 23:08:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.995 23:08:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.995 23:08:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.995 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:34.995 23:08:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.995 23:08:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.995 [2024-12-09 23:08:10.344130] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:34.996 [2024-12-09 23:08:10.344177] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:34.996 [2024-12-09 23:08:10.344194] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:26:34.996 [2024-12-09 23:08:10.344204] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:34.996 [2024-12-09 23:08:10.346057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:34.996 [2024-12-09 23:08:10.346176] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:34.996 [2024-12-09 23:08:10.346247] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:34.996 [2024-12-09 23:08:10.346283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:34.996 
[2024-12-09 23:08:10.346382] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:26:34.996 [2024-12-09 23:08:10.346392] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:34.996 [2024-12-09 23:08:10.346404] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:26:34.996 [2024-12-09 23:08:10.346448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:34.996 [2024-12-09 23:08:10.346525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:34.996 pt1 00:26:34.996 23:08:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.996 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:26:34.996 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:26:34.996 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:34.996 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:34.996 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:34.996 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:34.996 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:34.996 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:34.996 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:34.996 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:34.996 23:08:10 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:26:34.996 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:34.996 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:34.996 23:08:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.996 23:08:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.255 23:08:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.255 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:35.255 "name": "raid_bdev1", 00:26:35.255 "uuid": "47dee0a6-899b-4fea-ac46-7b4d0f455e18", 00:26:35.255 "strip_size_kb": 64, 00:26:35.255 "state": "configuring", 00:26:35.255 "raid_level": "raid5f", 00:26:35.255 "superblock": true, 00:26:35.255 "num_base_bdevs": 4, 00:26:35.255 "num_base_bdevs_discovered": 2, 00:26:35.255 "num_base_bdevs_operational": 3, 00:26:35.255 "base_bdevs_list": [ 00:26:35.255 { 00:26:35.255 "name": null, 00:26:35.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:35.255 "is_configured": false, 00:26:35.255 "data_offset": 2048, 00:26:35.255 "data_size": 63488 00:26:35.255 }, 00:26:35.255 { 00:26:35.255 "name": "pt2", 00:26:35.255 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:35.255 "is_configured": true, 00:26:35.255 "data_offset": 2048, 00:26:35.255 "data_size": 63488 00:26:35.255 }, 00:26:35.255 { 00:26:35.255 "name": "pt3", 00:26:35.255 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:35.255 "is_configured": true, 00:26:35.255 "data_offset": 2048, 00:26:35.255 "data_size": 63488 00:26:35.255 }, 00:26:35.255 { 00:26:35.255 "name": null, 00:26:35.255 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:35.255 "is_configured": false, 00:26:35.255 "data_offset": 2048, 00:26:35.255 "data_size": 63488 00:26:35.255 } 00:26:35.255 ] 
00:26:35.255 }' 00:26:35.255 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:35.255 23:08:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.514 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:26:35.514 23:08:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.514 23:08:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.514 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:35.514 23:08:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.514 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:26:35.514 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:35.514 23:08:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.514 23:08:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.514 [2024-12-09 23:08:10.704226] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:35.514 [2024-12-09 23:08:10.704278] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:35.514 [2024-12-09 23:08:10.704294] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:26:35.514 [2024-12-09 23:08:10.704302] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:35.514 [2024-12-09 23:08:10.704659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:35.514 [2024-12-09 23:08:10.704671] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:26:35.514 [2024-12-09 23:08:10.704732] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:35.514 [2024-12-09 23:08:10.704748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:35.514 [2024-12-09 23:08:10.704852] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:26:35.514 [2024-12-09 23:08:10.704859] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:35.514 [2024-12-09 23:08:10.705054] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:26:35.514 [2024-12-09 23:08:10.708900] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:26:35.514 [2024-12-09 23:08:10.708920] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:26:35.514 [2024-12-09 23:08:10.709147] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:35.514 pt4 00:26:35.514 23:08:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.514 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:35.514 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:35.514 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:35.514 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:35.514 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:35.514 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:35.514 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:35.514 23:08:10 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:35.514 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:35.514 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:35.514 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:35.514 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:35.514 23:08:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.514 23:08:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.514 23:08:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.514 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:35.514 "name": "raid_bdev1", 00:26:35.514 "uuid": "47dee0a6-899b-4fea-ac46-7b4d0f455e18", 00:26:35.514 "strip_size_kb": 64, 00:26:35.514 "state": "online", 00:26:35.514 "raid_level": "raid5f", 00:26:35.514 "superblock": true, 00:26:35.514 "num_base_bdevs": 4, 00:26:35.514 "num_base_bdevs_discovered": 3, 00:26:35.514 "num_base_bdevs_operational": 3, 00:26:35.514 "base_bdevs_list": [ 00:26:35.514 { 00:26:35.514 "name": null, 00:26:35.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:35.514 "is_configured": false, 00:26:35.514 "data_offset": 2048, 00:26:35.514 "data_size": 63488 00:26:35.514 }, 00:26:35.514 { 00:26:35.514 "name": "pt2", 00:26:35.514 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:35.514 "is_configured": true, 00:26:35.514 "data_offset": 2048, 00:26:35.514 "data_size": 63488 00:26:35.514 }, 00:26:35.514 { 00:26:35.514 "name": "pt3", 00:26:35.514 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:35.514 "is_configured": true, 00:26:35.514 "data_offset": 2048, 00:26:35.514 "data_size": 63488 
00:26:35.514 }, 00:26:35.514 { 00:26:35.514 "name": "pt4", 00:26:35.514 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:35.514 "is_configured": true, 00:26:35.514 "data_offset": 2048, 00:26:35.514 "data_size": 63488 00:26:35.514 } 00:26:35.514 ] 00:26:35.514 }' 00:26:35.514 23:08:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:35.514 23:08:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.799 23:08:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:35.799 23:08:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:26:35.799 23:08:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.799 23:08:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.799 23:08:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.799 23:08:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:26:35.799 23:08:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:35.799 23:08:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:26:35.799 23:08:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.799 23:08:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.799 [2024-12-09 23:08:11.053666] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:35.799 23:08:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.799 23:08:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 47dee0a6-899b-4fea-ac46-7b4d0f455e18 '!=' 47dee0a6-899b-4fea-ac46-7b4d0f455e18 ']' 00:26:35.799 23:08:11 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81737 00:26:35.799 23:08:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81737 ']' 00:26:35.799 23:08:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81737 00:26:35.799 23:08:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:26:35.799 23:08:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:35.799 23:08:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81737 00:26:35.799 killing process with pid 81737 00:26:35.799 23:08:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:35.799 23:08:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:35.799 23:08:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81737' 00:26:35.799 23:08:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81737 00:26:35.799 [2024-12-09 23:08:11.104274] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:35.799 [2024-12-09 23:08:11.104343] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:35.799 23:08:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81737 00:26:35.800 [2024-12-09 23:08:11.104406] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:35.800 [2024-12-09 23:08:11.104418] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:26:36.059 [2024-12-09 23:08:11.300545] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:36.629 23:08:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:26:36.629 
************************************ 00:26:36.629 END TEST raid5f_superblock_test 00:26:36.629 ************************************ 00:26:36.629 00:26:36.629 real 0m6.042s 00:26:36.629 user 0m9.646s 00:26:36.629 sys 0m1.026s 00:26:36.629 23:08:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:36.629 23:08:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.629 23:08:11 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:26:36.629 23:08:11 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:26:36.629 23:08:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:26:36.629 23:08:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:36.629 23:08:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:36.629 ************************************ 00:26:36.629 START TEST raid5f_rebuild_test 00:26:36.629 ************************************ 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:36.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=82201 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 82201 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 82201 ']' 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.629 23:08:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:36.892 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:36.892 Zero copy mechanism will not be used. 00:26:36.892 [2024-12-09 23:08:11.995058] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:26:36.892 [2024-12-09 23:08:11.995194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82201 ] 00:26:36.892 [2024-12-09 23:08:12.154341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.154 [2024-12-09 23:08:12.257936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.154 [2024-12-09 23:08:12.396762] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:37.154 [2024-12-09 23:08:12.396948] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:37.742 23:08:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:37.742 23:08:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:26:37.742 23:08:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:37.742 23:08:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:37.742 23:08:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.742 23:08:12 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.742 BaseBdev1_malloc 00:26:37.742 23:08:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.743 23:08:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:37.743 23:08:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.743 23:08:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.743 [2024-12-09 23:08:12.883759] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:37.743 [2024-12-09 23:08:12.883942] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:37.743 [2024-12-09 23:08:12.883970] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:37.743 [2024-12-09 23:08:12.883982] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:37.743 [2024-12-09 23:08:12.886142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:37.743 [2024-12-09 23:08:12.886178] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:37.743 BaseBdev1 00:26:37.743 23:08:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.743 23:08:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:37.743 23:08:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:37.743 23:08:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.743 23:08:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.743 BaseBdev2_malloc 00:26:37.743 23:08:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:26:37.743 23:08:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:37.743 23:08:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.743 23:08:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.743 [2024-12-09 23:08:12.924068] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:37.743 [2024-12-09 23:08:12.924253] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:37.743 [2024-12-09 23:08:12.924297] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:26:37.743 [2024-12-09 23:08:12.924465] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:37.743 [2024-12-09 23:08:12.926612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:37.743 [2024-12-09 23:08:12.926735] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:37.743 BaseBdev2 00:26:37.743 23:08:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.743 23:08:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:37.743 23:08:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:37.743 23:08:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.743 23:08:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.743 BaseBdev3_malloc 00:26:37.743 23:08:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.743 23:08:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:26:37.743 23:08:12 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.743 23:08:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.743 [2024-12-09 23:08:12.972353] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:26:37.743 [2024-12-09 23:08:12.972409] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:37.743 [2024-12-09 23:08:12.972431] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:26:37.743 [2024-12-09 23:08:12.972442] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:37.743 [2024-12-09 23:08:12.974552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:37.743 [2024-12-09 23:08:12.974694] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:37.743 BaseBdev3 00:26:37.743 23:08:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.743 23:08:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:37.743 23:08:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:37.743 23:08:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.743 23:08:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.743 BaseBdev4_malloc 00:26:37.743 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.743 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:26:37.743 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.743 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.743 [2024-12-09 23:08:13.008405] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:26:37.743 [2024-12-09 23:08:13.008462] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:37.743 [2024-12-09 23:08:13.008480] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:26:37.743 [2024-12-09 23:08:13.008490] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:37.743 [2024-12-09 23:08:13.010627] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:37.743 [2024-12-09 23:08:13.010666] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:37.743 BaseBdev4 00:26:37.743 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.743 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:26:37.743 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.743 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.743 spare_malloc 00:26:37.743 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.743 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:37.743 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.744 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.744 spare_delay 00:26:37.744 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.744 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:26:37.744 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:37.744 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.744 [2024-12-09 23:08:13.052503] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:37.744 [2024-12-09 23:08:13.052682] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:37.744 [2024-12-09 23:08:13.052705] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:26:37.744 [2024-12-09 23:08:13.052716] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:37.744 [2024-12-09 23:08:13.054884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:37.744 [2024-12-09 23:08:13.054920] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:37.744 spare 00:26:37.744 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.744 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:26:37.744 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.744 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.744 [2024-12-09 23:08:13.060577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:37.744 [2024-12-09 23:08:13.062428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:37.744 [2024-12-09 23:08:13.062488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:37.744 [2024-12-09 23:08:13.062538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:37.744 [2024-12-09 23:08:13.062621] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:26:37.744 
[2024-12-09 23:08:13.062633] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:26:37.744 [2024-12-09 23:08:13.062887] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:26:37.744 [2024-12-09 23:08:13.067914] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:26:37.744 [2024-12-09 23:08:13.067933] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:26:37.744 [2024-12-09 23:08:13.068131] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:37.744 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.744 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:37.744 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:37.744 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:37.744 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:37.744 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:37.744 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:37.744 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:37.744 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:37.744 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:37.744 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:37.744 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:37.744 23:08:13 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:37.744 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.744 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.744 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.008 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:38.008 "name": "raid_bdev1", 00:26:38.008 "uuid": "2d811617-c73c-4444-abfb-1851248a60e4", 00:26:38.008 "strip_size_kb": 64, 00:26:38.008 "state": "online", 00:26:38.008 "raid_level": "raid5f", 00:26:38.008 "superblock": false, 00:26:38.008 "num_base_bdevs": 4, 00:26:38.008 "num_base_bdevs_discovered": 4, 00:26:38.008 "num_base_bdevs_operational": 4, 00:26:38.008 "base_bdevs_list": [ 00:26:38.008 { 00:26:38.008 "name": "BaseBdev1", 00:26:38.008 "uuid": "04f61106-f8c8-594c-be23-f5f284aa75f3", 00:26:38.008 "is_configured": true, 00:26:38.008 "data_offset": 0, 00:26:38.008 "data_size": 65536 00:26:38.008 }, 00:26:38.008 { 00:26:38.008 "name": "BaseBdev2", 00:26:38.008 "uuid": "04d7454d-df1c-5790-8b4d-1fc813aefedc", 00:26:38.008 "is_configured": true, 00:26:38.008 "data_offset": 0, 00:26:38.008 "data_size": 65536 00:26:38.008 }, 00:26:38.008 { 00:26:38.008 "name": "BaseBdev3", 00:26:38.008 "uuid": "f92180e2-92c6-57de-b832-63c5b4cdb8ce", 00:26:38.008 "is_configured": true, 00:26:38.008 "data_offset": 0, 00:26:38.008 "data_size": 65536 00:26:38.008 }, 00:26:38.008 { 00:26:38.008 "name": "BaseBdev4", 00:26:38.008 "uuid": "147713eb-0e57-5f11-b430-3ca30f51c7b3", 00:26:38.008 "is_configured": true, 00:26:38.008 "data_offset": 0, 00:26:38.008 "data_size": 65536 00:26:38.008 } 00:26:38.008 ] 00:26:38.008 }' 00:26:38.008 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:38.008 23:08:13 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:26:38.008 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:26:38.008 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:38.008 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.008 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.008 [2024-12-09 23:08:13.365693] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:38.270 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.270 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:26:38.270 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:38.270 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.270 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.270 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:38.270 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.270 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:26:38.270 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:26:38.270 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:26:38.270 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:26:38.270 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:26:38.270 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 
00:26:38.270 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:26:38.270 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:38.270 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:38.270 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:38.270 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:26:38.270 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:38.270 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:38.270 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:38.270 [2024-12-09 23:08:13.613572] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:26:38.529 /dev/nbd0 00:26:38.529 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:38.529 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:38.529 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:38.529 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:26:38.529 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:38.529 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:38.529 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:38.529 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:26:38.529 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:38.529 23:08:13 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:38.529 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:38.529 1+0 records in 00:26:38.529 1+0 records out 00:26:38.529 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261155 s, 15.7 MB/s 00:26:38.529 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:38.529 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:26:38.529 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:38.529 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:38.529 23:08:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:26:38.529 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:38.529 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:38.529 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:26:38.529 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:26:38.529 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:26:38.529 23:08:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:26:39.102 512+0 records in 00:26:39.102 512+0 records out 00:26:39.103 100663296 bytes (101 MB, 96 MiB) copied, 0.503391 s, 200 MB/s 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk.sock 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:26:39.103 [2024-12-09 23:08:14.362887] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.103 [2024-12-09 23:08:14.388275] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:39.103 "name": "raid_bdev1", 00:26:39.103 "uuid": "2d811617-c73c-4444-abfb-1851248a60e4", 00:26:39.103 "strip_size_kb": 64, 00:26:39.103 "state": "online", 00:26:39.103 "raid_level": "raid5f", 00:26:39.103 
"superblock": false, 00:26:39.103 "num_base_bdevs": 4, 00:26:39.103 "num_base_bdevs_discovered": 3, 00:26:39.103 "num_base_bdevs_operational": 3, 00:26:39.103 "base_bdevs_list": [ 00:26:39.103 { 00:26:39.103 "name": null, 00:26:39.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:39.103 "is_configured": false, 00:26:39.103 "data_offset": 0, 00:26:39.103 "data_size": 65536 00:26:39.103 }, 00:26:39.103 { 00:26:39.103 "name": "BaseBdev2", 00:26:39.103 "uuid": "04d7454d-df1c-5790-8b4d-1fc813aefedc", 00:26:39.103 "is_configured": true, 00:26:39.103 "data_offset": 0, 00:26:39.103 "data_size": 65536 00:26:39.103 }, 00:26:39.103 { 00:26:39.103 "name": "BaseBdev3", 00:26:39.103 "uuid": "f92180e2-92c6-57de-b832-63c5b4cdb8ce", 00:26:39.103 "is_configured": true, 00:26:39.103 "data_offset": 0, 00:26:39.103 "data_size": 65536 00:26:39.103 }, 00:26:39.103 { 00:26:39.103 "name": "BaseBdev4", 00:26:39.103 "uuid": "147713eb-0e57-5f11-b430-3ca30f51c7b3", 00:26:39.103 "is_configured": true, 00:26:39.103 "data_offset": 0, 00:26:39.103 "data_size": 65536 00:26:39.103 } 00:26:39.103 ] 00:26:39.103 }' 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:39.103 23:08:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.363 23:08:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:26:39.364 23:08:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.364 23:08:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.364 [2024-12-09 23:08:14.700347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:39.364 [2024-12-09 23:08:14.710486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:26:39.364 23:08:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.364 23:08:14 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:26:39.364 [2024-12-09 23:08:14.717198] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:40.775 23:08:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:40.775 23:08:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:40.775 23:08:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:40.775 23:08:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:40.775 23:08:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:40.775 23:08:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:40.775 23:08:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:40.775 23:08:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.775 23:08:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:40.775 23:08:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.775 23:08:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:40.775 "name": "raid_bdev1", 00:26:40.775 "uuid": "2d811617-c73c-4444-abfb-1851248a60e4", 00:26:40.775 "strip_size_kb": 64, 00:26:40.775 "state": "online", 00:26:40.775 "raid_level": "raid5f", 00:26:40.775 "superblock": false, 00:26:40.775 "num_base_bdevs": 4, 00:26:40.775 "num_base_bdevs_discovered": 4, 00:26:40.775 "num_base_bdevs_operational": 4, 00:26:40.775 "process": { 00:26:40.775 "type": "rebuild", 00:26:40.775 "target": "spare", 00:26:40.775 "progress": { 00:26:40.775 "blocks": 17280, 00:26:40.775 "percent": 8 00:26:40.775 } 00:26:40.775 }, 00:26:40.775 
"base_bdevs_list": [ 00:26:40.775 { 00:26:40.775 "name": "spare", 00:26:40.775 "uuid": "ed15622d-1ad1-54f2-959a-a715ea21343b", 00:26:40.775 "is_configured": true, 00:26:40.775 "data_offset": 0, 00:26:40.775 "data_size": 65536 00:26:40.775 }, 00:26:40.775 { 00:26:40.775 "name": "BaseBdev2", 00:26:40.775 "uuid": "04d7454d-df1c-5790-8b4d-1fc813aefedc", 00:26:40.775 "is_configured": true, 00:26:40.775 "data_offset": 0, 00:26:40.775 "data_size": 65536 00:26:40.775 }, 00:26:40.775 { 00:26:40.775 "name": "BaseBdev3", 00:26:40.775 "uuid": "f92180e2-92c6-57de-b832-63c5b4cdb8ce", 00:26:40.775 "is_configured": true, 00:26:40.775 "data_offset": 0, 00:26:40.775 "data_size": 65536 00:26:40.775 }, 00:26:40.775 { 00:26:40.775 "name": "BaseBdev4", 00:26:40.775 "uuid": "147713eb-0e57-5f11-b430-3ca30f51c7b3", 00:26:40.775 "is_configured": true, 00:26:40.775 "data_offset": 0, 00:26:40.775 "data_size": 65536 00:26:40.776 } 00:26:40.776 ] 00:26:40.776 }' 00:26:40.776 23:08:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:40.776 23:08:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:40.776 23:08:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:40.776 23:08:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:40.776 23:08:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:26:40.776 23:08:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.776 23:08:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:40.776 [2024-12-09 23:08:15.814385] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:40.776 [2024-12-09 23:08:15.826566] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:40.776 
[2024-12-09 23:08:15.826628] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:40.776 [2024-12-09 23:08:15.826645] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:40.776 [2024-12-09 23:08:15.826655] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:40.776 23:08:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.776 23:08:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:40.776 23:08:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:40.776 23:08:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:40.776 23:08:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:40.776 23:08:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:40.776 23:08:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:40.776 23:08:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:40.776 23:08:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:40.776 23:08:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:40.776 23:08:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:40.776 23:08:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:40.776 23:08:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:40.776 23:08:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.776 23:08:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:26:40.776 23:08:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.776 23:08:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:40.776 "name": "raid_bdev1", 00:26:40.776 "uuid": "2d811617-c73c-4444-abfb-1851248a60e4", 00:26:40.776 "strip_size_kb": 64, 00:26:40.776 "state": "online", 00:26:40.776 "raid_level": "raid5f", 00:26:40.776 "superblock": false, 00:26:40.776 "num_base_bdevs": 4, 00:26:40.776 "num_base_bdevs_discovered": 3, 00:26:40.776 "num_base_bdevs_operational": 3, 00:26:40.776 "base_bdevs_list": [ 00:26:40.776 { 00:26:40.776 "name": null, 00:26:40.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:40.776 "is_configured": false, 00:26:40.776 "data_offset": 0, 00:26:40.776 "data_size": 65536 00:26:40.776 }, 00:26:40.776 { 00:26:40.776 "name": "BaseBdev2", 00:26:40.776 "uuid": "04d7454d-df1c-5790-8b4d-1fc813aefedc", 00:26:40.776 "is_configured": true, 00:26:40.776 "data_offset": 0, 00:26:40.776 "data_size": 65536 00:26:40.776 }, 00:26:40.776 { 00:26:40.776 "name": "BaseBdev3", 00:26:40.776 "uuid": "f92180e2-92c6-57de-b832-63c5b4cdb8ce", 00:26:40.776 "is_configured": true, 00:26:40.776 "data_offset": 0, 00:26:40.776 "data_size": 65536 00:26:40.776 }, 00:26:40.776 { 00:26:40.776 "name": "BaseBdev4", 00:26:40.776 "uuid": "147713eb-0e57-5f11-b430-3ca30f51c7b3", 00:26:40.776 "is_configured": true, 00:26:40.776 "data_offset": 0, 00:26:40.776 "data_size": 65536 00:26:40.776 } 00:26:40.776 ] 00:26:40.776 }' 00:26:40.776 23:08:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:40.776 23:08:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:41.037 23:08:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:41.037 23:08:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:41.037 23:08:16 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:41.037 23:08:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:41.037 23:08:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:41.037 23:08:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:41.037 23:08:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.037 23:08:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:41.037 23:08:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:41.037 23:08:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.037 23:08:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:41.037 "name": "raid_bdev1", 00:26:41.038 "uuid": "2d811617-c73c-4444-abfb-1851248a60e4", 00:26:41.038 "strip_size_kb": 64, 00:26:41.038 "state": "online", 00:26:41.038 "raid_level": "raid5f", 00:26:41.038 "superblock": false, 00:26:41.038 "num_base_bdevs": 4, 00:26:41.038 "num_base_bdevs_discovered": 3, 00:26:41.038 "num_base_bdevs_operational": 3, 00:26:41.038 "base_bdevs_list": [ 00:26:41.038 { 00:26:41.038 "name": null, 00:26:41.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:41.038 "is_configured": false, 00:26:41.038 "data_offset": 0, 00:26:41.038 "data_size": 65536 00:26:41.038 }, 00:26:41.038 { 00:26:41.038 "name": "BaseBdev2", 00:26:41.038 "uuid": "04d7454d-df1c-5790-8b4d-1fc813aefedc", 00:26:41.038 "is_configured": true, 00:26:41.038 "data_offset": 0, 00:26:41.038 "data_size": 65536 00:26:41.038 }, 00:26:41.038 { 00:26:41.038 "name": "BaseBdev3", 00:26:41.038 "uuid": "f92180e2-92c6-57de-b832-63c5b4cdb8ce", 00:26:41.038 "is_configured": true, 00:26:41.038 "data_offset": 0, 00:26:41.038 "data_size": 65536 00:26:41.038 }, 
00:26:41.038 { 00:26:41.038 "name": "BaseBdev4", 00:26:41.038 "uuid": "147713eb-0e57-5f11-b430-3ca30f51c7b3", 00:26:41.038 "is_configured": true, 00:26:41.038 "data_offset": 0, 00:26:41.038 "data_size": 65536 00:26:41.038 } 00:26:41.038 ] 00:26:41.038 }' 00:26:41.038 23:08:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:41.038 23:08:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:41.038 23:08:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:41.038 23:08:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:41.038 23:08:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:26:41.038 23:08:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.038 23:08:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:41.038 [2024-12-09 23:08:16.242194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:41.038 [2024-12-09 23:08:16.252782] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:26:41.038 23:08:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.038 23:08:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:26:41.038 [2024-12-09 23:08:16.259995] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:41.980 23:08:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:41.980 23:08:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:41.980 23:08:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:41.980 23:08:17 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:41.980 23:08:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:41.980 23:08:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:41.980 23:08:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.980 23:08:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:41.980 23:08:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:41.980 23:08:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.980 23:08:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:41.980 "name": "raid_bdev1", 00:26:41.980 "uuid": "2d811617-c73c-4444-abfb-1851248a60e4", 00:26:41.980 "strip_size_kb": 64, 00:26:41.980 "state": "online", 00:26:41.980 "raid_level": "raid5f", 00:26:41.980 "superblock": false, 00:26:41.980 "num_base_bdevs": 4, 00:26:41.980 "num_base_bdevs_discovered": 4, 00:26:41.980 "num_base_bdevs_operational": 4, 00:26:41.980 "process": { 00:26:41.980 "type": "rebuild", 00:26:41.980 "target": "spare", 00:26:41.980 "progress": { 00:26:41.980 "blocks": 17280, 00:26:41.980 "percent": 8 00:26:41.980 } 00:26:41.980 }, 00:26:41.980 "base_bdevs_list": [ 00:26:41.980 { 00:26:41.980 "name": "spare", 00:26:41.980 "uuid": "ed15622d-1ad1-54f2-959a-a715ea21343b", 00:26:41.980 "is_configured": true, 00:26:41.980 "data_offset": 0, 00:26:41.980 "data_size": 65536 00:26:41.980 }, 00:26:41.980 { 00:26:41.980 "name": "BaseBdev2", 00:26:41.980 "uuid": "04d7454d-df1c-5790-8b4d-1fc813aefedc", 00:26:41.980 "is_configured": true, 00:26:41.980 "data_offset": 0, 00:26:41.980 "data_size": 65536 00:26:41.980 }, 00:26:41.980 { 00:26:41.980 "name": "BaseBdev3", 00:26:41.980 "uuid": "f92180e2-92c6-57de-b832-63c5b4cdb8ce", 00:26:41.980 
"is_configured": true, 00:26:41.980 "data_offset": 0, 00:26:41.980 "data_size": 65536 00:26:41.980 }, 00:26:41.980 { 00:26:41.980 "name": "BaseBdev4", 00:26:41.980 "uuid": "147713eb-0e57-5f11-b430-3ca30f51c7b3", 00:26:41.980 "is_configured": true, 00:26:41.980 "data_offset": 0, 00:26:41.980 "data_size": 65536 00:26:41.980 } 00:26:41.980 ] 00:26:41.980 }' 00:26:41.980 23:08:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:41.980 23:08:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:41.980 23:08:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:42.261 23:08:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:42.261 23:08:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:26:42.261 23:08:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:26:42.261 23:08:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:26:42.261 23:08:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=505 00:26:42.261 23:08:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:42.261 23:08:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:42.261 23:08:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:42.261 23:08:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:42.261 23:08:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:42.261 23:08:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:42.261 23:08:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:26:42.261 23:08:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.261 23:08:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:42.261 23:08:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:42.261 23:08:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.261 23:08:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:42.261 "name": "raid_bdev1", 00:26:42.261 "uuid": "2d811617-c73c-4444-abfb-1851248a60e4", 00:26:42.261 "strip_size_kb": 64, 00:26:42.261 "state": "online", 00:26:42.261 "raid_level": "raid5f", 00:26:42.261 "superblock": false, 00:26:42.261 "num_base_bdevs": 4, 00:26:42.262 "num_base_bdevs_discovered": 4, 00:26:42.262 "num_base_bdevs_operational": 4, 00:26:42.262 "process": { 00:26:42.262 "type": "rebuild", 00:26:42.262 "target": "spare", 00:26:42.262 "progress": { 00:26:42.262 "blocks": 21120, 00:26:42.262 "percent": 10 00:26:42.262 } 00:26:42.262 }, 00:26:42.262 "base_bdevs_list": [ 00:26:42.262 { 00:26:42.262 "name": "spare", 00:26:42.262 "uuid": "ed15622d-1ad1-54f2-959a-a715ea21343b", 00:26:42.262 "is_configured": true, 00:26:42.262 "data_offset": 0, 00:26:42.262 "data_size": 65536 00:26:42.262 }, 00:26:42.262 { 00:26:42.262 "name": "BaseBdev2", 00:26:42.262 "uuid": "04d7454d-df1c-5790-8b4d-1fc813aefedc", 00:26:42.262 "is_configured": true, 00:26:42.262 "data_offset": 0, 00:26:42.262 "data_size": 65536 00:26:42.262 }, 00:26:42.262 { 00:26:42.262 "name": "BaseBdev3", 00:26:42.262 "uuid": "f92180e2-92c6-57de-b832-63c5b4cdb8ce", 00:26:42.262 "is_configured": true, 00:26:42.262 "data_offset": 0, 00:26:42.262 "data_size": 65536 00:26:42.262 }, 00:26:42.262 { 00:26:42.262 "name": "BaseBdev4", 00:26:42.262 "uuid": "147713eb-0e57-5f11-b430-3ca30f51c7b3", 00:26:42.262 "is_configured": true, 00:26:42.262 "data_offset": 0, 
00:26:42.262 "data_size": 65536 00:26:42.262 } 00:26:42.262 ] 00:26:42.262 }' 00:26:42.262 23:08:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:42.262 23:08:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:42.262 23:08:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:42.262 23:08:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:42.262 23:08:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:43.208 23:08:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:43.208 23:08:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:43.208 23:08:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:43.208 23:08:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:43.208 23:08:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:43.208 23:08:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:43.208 23:08:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:43.208 23:08:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:43.208 23:08:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.208 23:08:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:43.209 23:08:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.209 23:08:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:43.209 "name": "raid_bdev1", 00:26:43.209 "uuid": 
"2d811617-c73c-4444-abfb-1851248a60e4", 00:26:43.209 "strip_size_kb": 64, 00:26:43.209 "state": "online", 00:26:43.209 "raid_level": "raid5f", 00:26:43.209 "superblock": false, 00:26:43.209 "num_base_bdevs": 4, 00:26:43.209 "num_base_bdevs_discovered": 4, 00:26:43.209 "num_base_bdevs_operational": 4, 00:26:43.209 "process": { 00:26:43.209 "type": "rebuild", 00:26:43.209 "target": "spare", 00:26:43.209 "progress": { 00:26:43.209 "blocks": 40320, 00:26:43.209 "percent": 20 00:26:43.209 } 00:26:43.209 }, 00:26:43.209 "base_bdevs_list": [ 00:26:43.209 { 00:26:43.209 "name": "spare", 00:26:43.209 "uuid": "ed15622d-1ad1-54f2-959a-a715ea21343b", 00:26:43.209 "is_configured": true, 00:26:43.209 "data_offset": 0, 00:26:43.209 "data_size": 65536 00:26:43.209 }, 00:26:43.209 { 00:26:43.209 "name": "BaseBdev2", 00:26:43.209 "uuid": "04d7454d-df1c-5790-8b4d-1fc813aefedc", 00:26:43.209 "is_configured": true, 00:26:43.209 "data_offset": 0, 00:26:43.209 "data_size": 65536 00:26:43.209 }, 00:26:43.209 { 00:26:43.209 "name": "BaseBdev3", 00:26:43.209 "uuid": "f92180e2-92c6-57de-b832-63c5b4cdb8ce", 00:26:43.209 "is_configured": true, 00:26:43.209 "data_offset": 0, 00:26:43.209 "data_size": 65536 00:26:43.209 }, 00:26:43.209 { 00:26:43.209 "name": "BaseBdev4", 00:26:43.209 "uuid": "147713eb-0e57-5f11-b430-3ca30f51c7b3", 00:26:43.209 "is_configured": true, 00:26:43.209 "data_offset": 0, 00:26:43.209 "data_size": 65536 00:26:43.209 } 00:26:43.209 ] 00:26:43.209 }' 00:26:43.209 23:08:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:43.209 23:08:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:43.209 23:08:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:43.209 23:08:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:43.209 23:08:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- 
# sleep 1 00:26:44.224 23:08:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:44.224 23:08:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:44.224 23:08:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:44.224 23:08:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:44.224 23:08:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:44.224 23:08:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:44.495 23:08:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:44.495 23:08:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.495 23:08:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:44.495 23:08:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:44.495 23:08:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.495 23:08:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:44.495 "name": "raid_bdev1", 00:26:44.495 "uuid": "2d811617-c73c-4444-abfb-1851248a60e4", 00:26:44.495 "strip_size_kb": 64, 00:26:44.495 "state": "online", 00:26:44.495 "raid_level": "raid5f", 00:26:44.495 "superblock": false, 00:26:44.495 "num_base_bdevs": 4, 00:26:44.495 "num_base_bdevs_discovered": 4, 00:26:44.495 "num_base_bdevs_operational": 4, 00:26:44.495 "process": { 00:26:44.495 "type": "rebuild", 00:26:44.495 "target": "spare", 00:26:44.495 "progress": { 00:26:44.495 "blocks": 61440, 00:26:44.495 "percent": 31 00:26:44.495 } 00:26:44.495 }, 00:26:44.495 "base_bdevs_list": [ 00:26:44.495 { 00:26:44.495 "name": "spare", 00:26:44.495 "uuid": 
"ed15622d-1ad1-54f2-959a-a715ea21343b", 00:26:44.495 "is_configured": true, 00:26:44.495 "data_offset": 0, 00:26:44.495 "data_size": 65536 00:26:44.495 }, 00:26:44.495 { 00:26:44.495 "name": "BaseBdev2", 00:26:44.495 "uuid": "04d7454d-df1c-5790-8b4d-1fc813aefedc", 00:26:44.495 "is_configured": true, 00:26:44.495 "data_offset": 0, 00:26:44.495 "data_size": 65536 00:26:44.495 }, 00:26:44.495 { 00:26:44.495 "name": "BaseBdev3", 00:26:44.495 "uuid": "f92180e2-92c6-57de-b832-63c5b4cdb8ce", 00:26:44.495 "is_configured": true, 00:26:44.495 "data_offset": 0, 00:26:44.495 "data_size": 65536 00:26:44.495 }, 00:26:44.495 { 00:26:44.495 "name": "BaseBdev4", 00:26:44.495 "uuid": "147713eb-0e57-5f11-b430-3ca30f51c7b3", 00:26:44.495 "is_configured": true, 00:26:44.495 "data_offset": 0, 00:26:44.495 "data_size": 65536 00:26:44.495 } 00:26:44.495 ] 00:26:44.495 }' 00:26:44.495 23:08:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:44.495 23:08:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:44.495 23:08:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:44.495 23:08:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:44.495 23:08:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:45.461 23:08:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:45.461 23:08:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:45.461 23:08:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:45.461 23:08:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:45.461 23:08:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:45.461 23:08:20 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:45.461 23:08:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:45.461 23:08:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.461 23:08:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:45.461 23:08:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:45.461 23:08:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.461 23:08:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:45.461 "name": "raid_bdev1", 00:26:45.461 "uuid": "2d811617-c73c-4444-abfb-1851248a60e4", 00:26:45.461 "strip_size_kb": 64, 00:26:45.461 "state": "online", 00:26:45.461 "raid_level": "raid5f", 00:26:45.461 "superblock": false, 00:26:45.461 "num_base_bdevs": 4, 00:26:45.461 "num_base_bdevs_discovered": 4, 00:26:45.461 "num_base_bdevs_operational": 4, 00:26:45.461 "process": { 00:26:45.461 "type": "rebuild", 00:26:45.461 "target": "spare", 00:26:45.461 "progress": { 00:26:45.461 "blocks": 82560, 00:26:45.461 "percent": 41 00:26:45.461 } 00:26:45.461 }, 00:26:45.461 "base_bdevs_list": [ 00:26:45.461 { 00:26:45.461 "name": "spare", 00:26:45.461 "uuid": "ed15622d-1ad1-54f2-959a-a715ea21343b", 00:26:45.461 "is_configured": true, 00:26:45.461 "data_offset": 0, 00:26:45.461 "data_size": 65536 00:26:45.461 }, 00:26:45.461 { 00:26:45.461 "name": "BaseBdev2", 00:26:45.461 "uuid": "04d7454d-df1c-5790-8b4d-1fc813aefedc", 00:26:45.461 "is_configured": true, 00:26:45.461 "data_offset": 0, 00:26:45.461 "data_size": 65536 00:26:45.461 }, 00:26:45.461 { 00:26:45.461 "name": "BaseBdev3", 00:26:45.461 "uuid": "f92180e2-92c6-57de-b832-63c5b4cdb8ce", 00:26:45.461 "is_configured": true, 00:26:45.461 "data_offset": 0, 00:26:45.461 "data_size": 65536 00:26:45.461 }, 
00:26:45.461 { 00:26:45.461 "name": "BaseBdev4", 00:26:45.461 "uuid": "147713eb-0e57-5f11-b430-3ca30f51c7b3", 00:26:45.461 "is_configured": true, 00:26:45.461 "data_offset": 0, 00:26:45.461 "data_size": 65536 00:26:45.461 } 00:26:45.461 ] 00:26:45.461 }' 00:26:45.461 23:08:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:45.461 23:08:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:45.461 23:08:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:45.461 23:08:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:45.461 23:08:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:46.408 23:08:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:46.408 23:08:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:46.408 23:08:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:46.408 23:08:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:46.408 23:08:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:46.408 23:08:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:46.673 23:08:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:46.673 23:08:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:46.673 23:08:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.673 23:08:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:46.673 23:08:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:26:46.673 23:08:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:46.673 "name": "raid_bdev1", 00:26:46.673 "uuid": "2d811617-c73c-4444-abfb-1851248a60e4", 00:26:46.673 "strip_size_kb": 64, 00:26:46.673 "state": "online", 00:26:46.673 "raid_level": "raid5f", 00:26:46.673 "superblock": false, 00:26:46.673 "num_base_bdevs": 4, 00:26:46.673 "num_base_bdevs_discovered": 4, 00:26:46.673 "num_base_bdevs_operational": 4, 00:26:46.673 "process": { 00:26:46.673 "type": "rebuild", 00:26:46.673 "target": "spare", 00:26:46.673 "progress": { 00:26:46.673 "blocks": 103680, 00:26:46.673 "percent": 52 00:26:46.673 } 00:26:46.673 }, 00:26:46.673 "base_bdevs_list": [ 00:26:46.673 { 00:26:46.673 "name": "spare", 00:26:46.673 "uuid": "ed15622d-1ad1-54f2-959a-a715ea21343b", 00:26:46.673 "is_configured": true, 00:26:46.673 "data_offset": 0, 00:26:46.673 "data_size": 65536 00:26:46.673 }, 00:26:46.673 { 00:26:46.673 "name": "BaseBdev2", 00:26:46.673 "uuid": "04d7454d-df1c-5790-8b4d-1fc813aefedc", 00:26:46.673 "is_configured": true, 00:26:46.673 "data_offset": 0, 00:26:46.673 "data_size": 65536 00:26:46.673 }, 00:26:46.673 { 00:26:46.673 "name": "BaseBdev3", 00:26:46.673 "uuid": "f92180e2-92c6-57de-b832-63c5b4cdb8ce", 00:26:46.673 "is_configured": true, 00:26:46.673 "data_offset": 0, 00:26:46.673 "data_size": 65536 00:26:46.673 }, 00:26:46.673 { 00:26:46.673 "name": "BaseBdev4", 00:26:46.673 "uuid": "147713eb-0e57-5f11-b430-3ca30f51c7b3", 00:26:46.673 "is_configured": true, 00:26:46.673 "data_offset": 0, 00:26:46.673 "data_size": 65536 00:26:46.674 } 00:26:46.674 ] 00:26:46.674 }' 00:26:46.674 23:08:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:46.674 23:08:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:46.674 23:08:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:46.674 23:08:21 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:46.674 23:08:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:47.713 23:08:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:47.713 23:08:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:47.713 23:08:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:47.713 23:08:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:47.713 23:08:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:47.713 23:08:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:47.713 23:08:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:47.713 23:08:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.713 23:08:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:47.713 23:08:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.713 23:08:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.713 23:08:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:47.713 "name": "raid_bdev1", 00:26:47.713 "uuid": "2d811617-c73c-4444-abfb-1851248a60e4", 00:26:47.713 "strip_size_kb": 64, 00:26:47.713 "state": "online", 00:26:47.713 "raid_level": "raid5f", 00:26:47.713 "superblock": false, 00:26:47.713 "num_base_bdevs": 4, 00:26:47.713 "num_base_bdevs_discovered": 4, 00:26:47.713 "num_base_bdevs_operational": 4, 00:26:47.713 "process": { 00:26:47.713 "type": "rebuild", 00:26:47.713 "target": "spare", 00:26:47.713 "progress": { 00:26:47.713 "blocks": 124800, 
00:26:47.713 "percent": 63 00:26:47.713 } 00:26:47.713 }, 00:26:47.713 "base_bdevs_list": [ 00:26:47.713 { 00:26:47.713 "name": "spare", 00:26:47.713 "uuid": "ed15622d-1ad1-54f2-959a-a715ea21343b", 00:26:47.713 "is_configured": true, 00:26:47.713 "data_offset": 0, 00:26:47.713 "data_size": 65536 00:26:47.713 }, 00:26:47.713 { 00:26:47.713 "name": "BaseBdev2", 00:26:47.713 "uuid": "04d7454d-df1c-5790-8b4d-1fc813aefedc", 00:26:47.713 "is_configured": true, 00:26:47.713 "data_offset": 0, 00:26:47.713 "data_size": 65536 00:26:47.713 }, 00:26:47.713 { 00:26:47.713 "name": "BaseBdev3", 00:26:47.713 "uuid": "f92180e2-92c6-57de-b832-63c5b4cdb8ce", 00:26:47.713 "is_configured": true, 00:26:47.713 "data_offset": 0, 00:26:47.713 "data_size": 65536 00:26:47.713 }, 00:26:47.713 { 00:26:47.713 "name": "BaseBdev4", 00:26:47.713 "uuid": "147713eb-0e57-5f11-b430-3ca30f51c7b3", 00:26:47.713 "is_configured": true, 00:26:47.713 "data_offset": 0, 00:26:47.713 "data_size": 65536 00:26:47.713 } 00:26:47.713 ] 00:26:47.713 }' 00:26:47.713 23:08:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:47.713 23:08:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:47.713 23:08:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:47.713 23:08:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:47.713 23:08:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:48.658 23:08:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:48.658 23:08:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:48.658 23:08:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:48.658 23:08:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:26:48.658 23:08:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:48.658 23:08:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:48.658 23:08:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:48.658 23:08:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:48.658 23:08:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.658 23:08:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:48.658 23:08:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.658 23:08:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:48.658 "name": "raid_bdev1", 00:26:48.658 "uuid": "2d811617-c73c-4444-abfb-1851248a60e4", 00:26:48.658 "strip_size_kb": 64, 00:26:48.658 "state": "online", 00:26:48.658 "raid_level": "raid5f", 00:26:48.658 "superblock": false, 00:26:48.658 "num_base_bdevs": 4, 00:26:48.658 "num_base_bdevs_discovered": 4, 00:26:48.658 "num_base_bdevs_operational": 4, 00:26:48.658 "process": { 00:26:48.658 "type": "rebuild", 00:26:48.658 "target": "spare", 00:26:48.658 "progress": { 00:26:48.658 "blocks": 145920, 00:26:48.658 "percent": 74 00:26:48.658 } 00:26:48.658 }, 00:26:48.658 "base_bdevs_list": [ 00:26:48.658 { 00:26:48.658 "name": "spare", 00:26:48.658 "uuid": "ed15622d-1ad1-54f2-959a-a715ea21343b", 00:26:48.658 "is_configured": true, 00:26:48.658 "data_offset": 0, 00:26:48.658 "data_size": 65536 00:26:48.658 }, 00:26:48.658 { 00:26:48.658 "name": "BaseBdev2", 00:26:48.658 "uuid": "04d7454d-df1c-5790-8b4d-1fc813aefedc", 00:26:48.658 "is_configured": true, 00:26:48.658 "data_offset": 0, 00:26:48.658 "data_size": 65536 00:26:48.658 }, 00:26:48.658 { 00:26:48.658 "name": "BaseBdev3", 00:26:48.658 "uuid": 
"f92180e2-92c6-57de-b832-63c5b4cdb8ce", 00:26:48.658 "is_configured": true, 00:26:48.658 "data_offset": 0, 00:26:48.658 "data_size": 65536 00:26:48.658 }, 00:26:48.658 { 00:26:48.658 "name": "BaseBdev4", 00:26:48.658 "uuid": "147713eb-0e57-5f11-b430-3ca30f51c7b3", 00:26:48.658 "is_configured": true, 00:26:48.658 "data_offset": 0, 00:26:48.658 "data_size": 65536 00:26:48.658 } 00:26:48.658 ] 00:26:48.658 }' 00:26:48.658 23:08:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:48.920 23:08:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:48.920 23:08:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:48.920 23:08:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:48.920 23:08:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:49.868 23:08:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:49.868 23:08:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:49.868 23:08:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:49.868 23:08:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:49.868 23:08:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:49.868 23:08:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:49.868 23:08:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:49.868 23:08:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:49.868 23:08:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.868 23:08:25 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:49.868 23:08:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.868 23:08:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:49.868 "name": "raid_bdev1", 00:26:49.868 "uuid": "2d811617-c73c-4444-abfb-1851248a60e4", 00:26:49.868 "strip_size_kb": 64, 00:26:49.868 "state": "online", 00:26:49.868 "raid_level": "raid5f", 00:26:49.868 "superblock": false, 00:26:49.868 "num_base_bdevs": 4, 00:26:49.868 "num_base_bdevs_discovered": 4, 00:26:49.868 "num_base_bdevs_operational": 4, 00:26:49.868 "process": { 00:26:49.868 "type": "rebuild", 00:26:49.868 "target": "spare", 00:26:49.868 "progress": { 00:26:49.868 "blocks": 167040, 00:26:49.868 "percent": 84 00:26:49.868 } 00:26:49.868 }, 00:26:49.868 "base_bdevs_list": [ 00:26:49.868 { 00:26:49.868 "name": "spare", 00:26:49.868 "uuid": "ed15622d-1ad1-54f2-959a-a715ea21343b", 00:26:49.868 "is_configured": true, 00:26:49.868 "data_offset": 0, 00:26:49.868 "data_size": 65536 00:26:49.868 }, 00:26:49.868 { 00:26:49.868 "name": "BaseBdev2", 00:26:49.868 "uuid": "04d7454d-df1c-5790-8b4d-1fc813aefedc", 00:26:49.868 "is_configured": true, 00:26:49.868 "data_offset": 0, 00:26:49.868 "data_size": 65536 00:26:49.868 }, 00:26:49.868 { 00:26:49.868 "name": "BaseBdev3", 00:26:49.868 "uuid": "f92180e2-92c6-57de-b832-63c5b4cdb8ce", 00:26:49.868 "is_configured": true, 00:26:49.868 "data_offset": 0, 00:26:49.868 "data_size": 65536 00:26:49.868 }, 00:26:49.868 { 00:26:49.868 "name": "BaseBdev4", 00:26:49.868 "uuid": "147713eb-0e57-5f11-b430-3ca30f51c7b3", 00:26:49.868 "is_configured": true, 00:26:49.868 "data_offset": 0, 00:26:49.868 "data_size": 65536 00:26:49.868 } 00:26:49.868 ] 00:26:49.868 }' 00:26:49.868 23:08:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:49.868 23:08:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:26:49.868 23:08:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:49.868 23:08:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:49.868 23:08:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:50.869 23:08:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:50.869 23:08:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:50.869 23:08:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:50.869 23:08:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:50.869 23:08:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:50.869 23:08:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:50.869 23:08:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:50.869 23:08:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:50.869 23:08:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.869 23:08:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:50.869 23:08:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.869 23:08:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:50.869 "name": "raid_bdev1", 00:26:50.869 "uuid": "2d811617-c73c-4444-abfb-1851248a60e4", 00:26:50.869 "strip_size_kb": 64, 00:26:50.869 "state": "online", 00:26:50.869 "raid_level": "raid5f", 00:26:50.869 "superblock": false, 00:26:50.869 "num_base_bdevs": 4, 00:26:50.869 "num_base_bdevs_discovered": 4, 00:26:50.869 
"num_base_bdevs_operational": 4, 00:26:50.869 "process": { 00:26:50.869 "type": "rebuild", 00:26:50.869 "target": "spare", 00:26:50.869 "progress": { 00:26:50.869 "blocks": 188160, 00:26:50.869 "percent": 95 00:26:50.869 } 00:26:50.869 }, 00:26:50.869 "base_bdevs_list": [ 00:26:50.869 { 00:26:50.869 "name": "spare", 00:26:50.869 "uuid": "ed15622d-1ad1-54f2-959a-a715ea21343b", 00:26:50.869 "is_configured": true, 00:26:50.869 "data_offset": 0, 00:26:50.869 "data_size": 65536 00:26:50.869 }, 00:26:50.869 { 00:26:50.869 "name": "BaseBdev2", 00:26:50.869 "uuid": "04d7454d-df1c-5790-8b4d-1fc813aefedc", 00:26:50.869 "is_configured": true, 00:26:50.869 "data_offset": 0, 00:26:50.869 "data_size": 65536 00:26:50.869 }, 00:26:50.869 { 00:26:50.869 "name": "BaseBdev3", 00:26:50.869 "uuid": "f92180e2-92c6-57de-b832-63c5b4cdb8ce", 00:26:50.869 "is_configured": true, 00:26:50.869 "data_offset": 0, 00:26:50.869 "data_size": 65536 00:26:50.869 }, 00:26:50.869 { 00:26:50.869 "name": "BaseBdev4", 00:26:50.869 "uuid": "147713eb-0e57-5f11-b430-3ca30f51c7b3", 00:26:50.869 "is_configured": true, 00:26:50.869 "data_offset": 0, 00:26:50.869 "data_size": 65536 00:26:50.869 } 00:26:50.869 ] 00:26:50.869 }' 00:26:50.869 23:08:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:51.129 23:08:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:51.129 23:08:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:51.129 23:08:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:51.129 23:08:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:51.391 [2024-12-09 23:08:26.633486] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:51.391 [2024-12-09 23:08:26.633558] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid 
bdev raid_bdev1 00:26:51.391 [2024-12-09 23:08:26.633604] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:51.966 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:51.966 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:51.966 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:51.966 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:51.966 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:51.966 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:51.966 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:51.966 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:51.966 23:08:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.966 23:08:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:51.966 23:08:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.966 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:51.966 "name": "raid_bdev1", 00:26:51.966 "uuid": "2d811617-c73c-4444-abfb-1851248a60e4", 00:26:51.966 "strip_size_kb": 64, 00:26:51.966 "state": "online", 00:26:51.966 "raid_level": "raid5f", 00:26:51.966 "superblock": false, 00:26:51.966 "num_base_bdevs": 4, 00:26:51.966 "num_base_bdevs_discovered": 4, 00:26:51.966 "num_base_bdevs_operational": 4, 00:26:51.966 "base_bdevs_list": [ 00:26:51.966 { 00:26:51.966 "name": "spare", 00:26:51.966 "uuid": "ed15622d-1ad1-54f2-959a-a715ea21343b", 00:26:51.966 "is_configured": true, 00:26:51.966 "data_offset": 
0, 00:26:51.966 "data_size": 65536 00:26:51.966 }, 00:26:51.966 { 00:26:51.966 "name": "BaseBdev2", 00:26:51.966 "uuid": "04d7454d-df1c-5790-8b4d-1fc813aefedc", 00:26:51.966 "is_configured": true, 00:26:51.966 "data_offset": 0, 00:26:51.966 "data_size": 65536 00:26:51.966 }, 00:26:51.966 { 00:26:51.966 "name": "BaseBdev3", 00:26:51.966 "uuid": "f92180e2-92c6-57de-b832-63c5b4cdb8ce", 00:26:51.966 "is_configured": true, 00:26:51.966 "data_offset": 0, 00:26:51.966 "data_size": 65536 00:26:51.966 }, 00:26:51.966 { 00:26:51.966 "name": "BaseBdev4", 00:26:51.966 "uuid": "147713eb-0e57-5f11-b430-3ca30f51c7b3", 00:26:51.966 "is_configured": true, 00:26:51.966 "data_offset": 0, 00:26:51.966 "data_size": 65536 00:26:51.966 } 00:26:51.966 ] 00:26:51.966 }' 00:26:51.966 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:52.229 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:52.229 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:52.229 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:26:52.229 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:26:52.229 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:52.229 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:52.229 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:52.229 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:52.229 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:52.229 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:52.229 23:08:27 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:52.229 23:08:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.229 23:08:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:52.229 23:08:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.229 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:52.229 "name": "raid_bdev1", 00:26:52.229 "uuid": "2d811617-c73c-4444-abfb-1851248a60e4", 00:26:52.229 "strip_size_kb": 64, 00:26:52.229 "state": "online", 00:26:52.229 "raid_level": "raid5f", 00:26:52.229 "superblock": false, 00:26:52.229 "num_base_bdevs": 4, 00:26:52.229 "num_base_bdevs_discovered": 4, 00:26:52.229 "num_base_bdevs_operational": 4, 00:26:52.229 "base_bdevs_list": [ 00:26:52.229 { 00:26:52.229 "name": "spare", 00:26:52.229 "uuid": "ed15622d-1ad1-54f2-959a-a715ea21343b", 00:26:52.229 "is_configured": true, 00:26:52.229 "data_offset": 0, 00:26:52.229 "data_size": 65536 00:26:52.229 }, 00:26:52.229 { 00:26:52.229 "name": "BaseBdev2", 00:26:52.229 "uuid": "04d7454d-df1c-5790-8b4d-1fc813aefedc", 00:26:52.229 "is_configured": true, 00:26:52.229 "data_offset": 0, 00:26:52.229 "data_size": 65536 00:26:52.229 }, 00:26:52.229 { 00:26:52.229 "name": "BaseBdev3", 00:26:52.229 "uuid": "f92180e2-92c6-57de-b832-63c5b4cdb8ce", 00:26:52.229 "is_configured": true, 00:26:52.229 "data_offset": 0, 00:26:52.229 "data_size": 65536 00:26:52.229 }, 00:26:52.229 { 00:26:52.229 "name": "BaseBdev4", 00:26:52.229 "uuid": "147713eb-0e57-5f11-b430-3ca30f51c7b3", 00:26:52.229 "is_configured": true, 00:26:52.229 "data_offset": 0, 00:26:52.229 "data_size": 65536 00:26:52.229 } 00:26:52.229 ] 00:26:52.229 }' 00:26:52.229 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:52.229 23:08:27 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:52.229 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:52.229 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:52.229 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:52.229 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:52.229 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:52.229 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:52.229 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:52.229 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:52.229 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:52.229 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:52.229 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:52.229 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:52.229 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:52.229 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:52.229 23:08:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.229 23:08:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:52.229 23:08:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.229 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:26:52.229 "name": "raid_bdev1", 00:26:52.229 "uuid": "2d811617-c73c-4444-abfb-1851248a60e4", 00:26:52.229 "strip_size_kb": 64, 00:26:52.229 "state": "online", 00:26:52.229 "raid_level": "raid5f", 00:26:52.229 "superblock": false, 00:26:52.229 "num_base_bdevs": 4, 00:26:52.229 "num_base_bdevs_discovered": 4, 00:26:52.229 "num_base_bdevs_operational": 4, 00:26:52.229 "base_bdevs_list": [ 00:26:52.229 { 00:26:52.229 "name": "spare", 00:26:52.229 "uuid": "ed15622d-1ad1-54f2-959a-a715ea21343b", 00:26:52.229 "is_configured": true, 00:26:52.229 "data_offset": 0, 00:26:52.229 "data_size": 65536 00:26:52.229 }, 00:26:52.229 { 00:26:52.229 "name": "BaseBdev2", 00:26:52.229 "uuid": "04d7454d-df1c-5790-8b4d-1fc813aefedc", 00:26:52.229 "is_configured": true, 00:26:52.229 "data_offset": 0, 00:26:52.229 "data_size": 65536 00:26:52.229 }, 00:26:52.229 { 00:26:52.229 "name": "BaseBdev3", 00:26:52.229 "uuid": "f92180e2-92c6-57de-b832-63c5b4cdb8ce", 00:26:52.229 "is_configured": true, 00:26:52.229 "data_offset": 0, 00:26:52.229 "data_size": 65536 00:26:52.229 }, 00:26:52.229 { 00:26:52.229 "name": "BaseBdev4", 00:26:52.230 "uuid": "147713eb-0e57-5f11-b430-3ca30f51c7b3", 00:26:52.230 "is_configured": true, 00:26:52.230 "data_offset": 0, 00:26:52.230 "data_size": 65536 00:26:52.230 } 00:26:52.230 ] 00:26:52.230 }' 00:26:52.230 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:52.230 23:08:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:52.491 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:52.491 23:08:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.491 23:08:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:52.491 [2024-12-09 23:08:27.789917] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:52.491 [2024-12-09 
23:08:27.789943] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:52.491 [2024-12-09 23:08:27.790006] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:52.491 [2024-12-09 23:08:27.790084] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:52.491 [2024-12-09 23:08:27.790093] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:26:52.491 23:08:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.491 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:26:52.491 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:52.491 23:08:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.491 23:08:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:52.491 23:08:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.491 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:26:52.491 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:26:52.491 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:26:52.491 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:52.491 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:26:52.491 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:26:52.491 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:52.491 23:08:27 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:52.491 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:52.491 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:26:52.491 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:52.491 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:52.491 23:08:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:52.759 /dev/nbd0 00:26:52.759 23:08:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:52.759 23:08:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:52.759 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:52.759 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:26:52.759 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:52.759 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:52.759 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:52.759 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:26:52.760 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:52.760 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:52.760 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:52.760 1+0 records in 00:26:52.760 1+0 records out 00:26:52.760 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401952 s, 10.2 MB/s 
00:26:52.760 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:52.760 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:26:52.760 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:52.760 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:52.760 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:26:52.760 23:08:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:52.760 23:08:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:52.761 23:08:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:26:53.022 /dev/nbd1 00:26:53.022 23:08:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:53.022 23:08:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:53.022 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:26:53.022 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:26:53.022 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:53.022 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:53.022 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:26:53.022 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:26:53.022 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:53.022 23:08:28 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:53.022 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:53.023 1+0 records in 00:26:53.023 1+0 records out 00:26:53.023 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230509 s, 17.8 MB/s 00:26:53.023 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:53.023 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:26:53.023 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:53.023 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:53.023 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:26:53.023 23:08:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:53.023 23:08:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:53.023 23:08:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:26:53.284 23:08:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:26:53.284 23:08:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:26:53.284 23:08:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:53.284 23:08:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:53.284 23:08:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:26:53.284 23:08:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:53.284 23:08:28 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:26:53.284 23:08:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:53.284 23:08:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:53.284 23:08:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:53.284 23:08:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:53.284 23:08:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:53.284 23:08:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:53.284 23:08:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:26:53.284 23:08:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:53.284 23:08:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:53.284 23:08:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:26:53.547 23:08:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:53.547 23:08:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:53.547 23:08:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:53.547 23:08:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:53.547 23:08:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:53.547 23:08:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:53.547 23:08:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:26:53.547 23:08:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:53.547 23:08:28 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:26:53.547 23:08:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 82201 00:26:53.547 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 82201 ']' 00:26:53.547 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 82201 00:26:53.547 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:26:53.547 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:53.547 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82201 00:26:53.547 killing process with pid 82201 00:26:53.547 Received shutdown signal, test time was about 60.000000 seconds 00:26:53.547 00:26:53.547 Latency(us) 00:26:53.547 [2024-12-09T23:08:28.910Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:53.547 [2024-12-09T23:08:28.910Z] =================================================================================================================== 00:26:53.547 [2024-12-09T23:08:28.910Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:53.547 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:53.547 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:53.547 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82201' 00:26:53.547 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 82201 00:26:53.547 [2024-12-09 23:08:28.862592] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:53.547 23:08:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 82201 00:26:53.808 [2024-12-09 23:08:29.111352] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:26:54.381 23:08:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:26:54.381 00:26:54.381 real 0m17.773s 00:26:54.382 user 0m20.769s 00:26:54.382 sys 0m1.730s 00:26:54.382 23:08:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:54.382 23:08:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:54.382 ************************************ 00:26:54.382 END TEST raid5f_rebuild_test 00:26:54.382 ************************************ 00:26:54.382 23:08:29 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:26:54.382 23:08:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:26:54.382 23:08:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:54.382 23:08:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:54.643 ************************************ 00:26:54.643 START TEST raid5f_rebuild_test_sb 00:26:54.643 ************************************ 00:26:54.643 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:26:54.643 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:26:54.643 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:26:54.643 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:26:54.643 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:26:54.643 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:26:54.643 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:26:54.643 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:54.643 23:08:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:26:54.643 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:54.643 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:54.643 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:26:54.643 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:54.643 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:54.643 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:26:54.643 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:54.643 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:54.643 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:26:54.643 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:54.643 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:54.643 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:54.643 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:26:54.643 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:26:54.644 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:26:54.644 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:26:54.644 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:26:54.644 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:26:54.644 
23:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:26:54.644 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:26:54.644 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:26:54.644 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:26:54.644 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:26:54.644 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:26:54.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:54.644 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82695 00:26:54.644 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82695 00:26:54.644 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82695 ']' 00:26:54.644 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.644 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:54.644 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:54.644 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:54.644 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.644 23:08:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:54.644 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:26:54.644 Zero copy mechanism will not be used. 00:26:54.644 [2024-12-09 23:08:29.854078] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:26:54.644 [2024-12-09 23:08:29.854274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82695 ] 00:26:54.908 [2024-12-09 23:08:30.031354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.908 [2024-12-09 23:08:30.136085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:55.173 [2024-12-09 23:08:30.273733] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:55.173 [2024-12-09 23:08:30.273773] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:55.442 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:55.442 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:26:55.442 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:55.442 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:55.442 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.442 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.442 BaseBdev1_malloc 00:26:55.442 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.442 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:55.442 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.442 
23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.442 [2024-12-09 23:08:30.692693] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:55.442 [2024-12-09 23:08:30.692753] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:55.442 [2024-12-09 23:08:30.692774] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:55.442 [2024-12-09 23:08:30.692786] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:55.442 [2024-12-09 23:08:30.694947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:55.442 [2024-12-09 23:08:30.694985] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:55.442 BaseBdev1 00:26:55.442 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.442 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:55.442 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:55.442 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.442 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.442 BaseBdev2_malloc 00:26:55.442 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.442 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:55.442 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.442 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.442 [2024-12-09 23:08:30.728868] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev2_malloc 00:26:55.442 [2024-12-09 23:08:30.728921] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:55.442 [2024-12-09 23:08:30.728941] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:26:55.442 [2024-12-09 23:08:30.728953] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:55.442 [2024-12-09 23:08:30.731084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:55.442 [2024-12-09 23:08:30.731132] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:55.442 BaseBdev2 00:26:55.442 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.442 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:55.442 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:55.442 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.442 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.442 BaseBdev3_malloc 00:26:55.442 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.442 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:26:55.442 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.442 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.442 [2024-12-09 23:08:30.780139] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:26:55.442 [2024-12-09 23:08:30.780189] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:55.442 [2024-12-09 
23:08:30.780209] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:26:55.442 [2024-12-09 23:08:30.780220] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:55.442 [2024-12-09 23:08:30.782352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:55.442 [2024-12-09 23:08:30.782388] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:55.442 BaseBdev3 00:26:55.442 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.442 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:55.442 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:55.442 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.442 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.704 BaseBdev4_malloc 00:26:55.704 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.704 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:26:55.704 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.704 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.704 [2024-12-09 23:08:30.816138] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:26:55.704 [2024-12-09 23:08:30.816185] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:55.704 [2024-12-09 23:08:30.816201] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:26:55.704 [2024-12-09 23:08:30.816212] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:55.704 [2024-12-09 23:08:30.818305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:55.704 [2024-12-09 23:08:30.818341] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:55.704 BaseBdev4 00:26:55.704 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.704 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:26:55.704 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.704 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.704 spare_malloc 00:26:55.704 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.704 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:55.704 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.704 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.704 spare_delay 00:26:55.704 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.704 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:26:55.704 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.704 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.704 [2024-12-09 23:08:30.860260] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:55.704 [2024-12-09 23:08:30.860306] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:26:55.704 [2024-12-09 23:08:30.860323] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:26:55.704 [2024-12-09 23:08:30.860334] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:55.704 [2024-12-09 23:08:30.862466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:55.704 [2024-12-09 23:08:30.862503] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:55.704 spare 00:26:55.704 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.704 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:26:55.704 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.704 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.704 [2024-12-09 23:08:30.868321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:55.704 [2024-12-09 23:08:30.870167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:55.704 [2024-12-09 23:08:30.870227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:55.704 [2024-12-09 23:08:30.870278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:55.704 [2024-12-09 23:08:30.870460] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:26:55.704 [2024-12-09 23:08:30.870472] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:55.704 [2024-12-09 23:08:30.870717] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:26:55.704 [2024-12-09 23:08:30.875645] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007780 00:26:55.704 [2024-12-09 23:08:30.875666] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:26:55.704 [2024-12-09 23:08:30.875836] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:55.704 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.704 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:55.704 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:55.705 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:55.705 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:55.705 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:55.705 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:55.705 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:55.705 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:55.705 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:55.705 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:55.705 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:55.705 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.705 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.705 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:55.705 
23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.705 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:55.705 "name": "raid_bdev1", 00:26:55.705 "uuid": "32b90177-099d-4203-b4d6-43dc2785d18b", 00:26:55.705 "strip_size_kb": 64, 00:26:55.705 "state": "online", 00:26:55.705 "raid_level": "raid5f", 00:26:55.705 "superblock": true, 00:26:55.705 "num_base_bdevs": 4, 00:26:55.705 "num_base_bdevs_discovered": 4, 00:26:55.705 "num_base_bdevs_operational": 4, 00:26:55.705 "base_bdevs_list": [ 00:26:55.705 { 00:26:55.705 "name": "BaseBdev1", 00:26:55.705 "uuid": "1d09da42-b77d-5f2d-a163-950a566153ca", 00:26:55.705 "is_configured": true, 00:26:55.705 "data_offset": 2048, 00:26:55.705 "data_size": 63488 00:26:55.705 }, 00:26:55.705 { 00:26:55.705 "name": "BaseBdev2", 00:26:55.705 "uuid": "f82821af-68c1-5aea-8c5e-7ee10a7e8be4", 00:26:55.705 "is_configured": true, 00:26:55.705 "data_offset": 2048, 00:26:55.705 "data_size": 63488 00:26:55.705 }, 00:26:55.705 { 00:26:55.705 "name": "BaseBdev3", 00:26:55.705 "uuid": "3eee5fa6-8c08-5a42-b6a9-13bd3ddc0c44", 00:26:55.705 "is_configured": true, 00:26:55.705 "data_offset": 2048, 00:26:55.705 "data_size": 63488 00:26:55.705 }, 00:26:55.705 { 00:26:55.705 "name": "BaseBdev4", 00:26:55.705 "uuid": "55306c4d-f053-5701-a802-7ffc63782d72", 00:26:55.705 "is_configured": true, 00:26:55.705 "data_offset": 2048, 00:26:55.705 "data_size": 63488 00:26:55.705 } 00:26:55.705 ] 00:26:55.705 }' 00:26:55.705 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:55.705 23:08:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.965 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:55.965 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.965 23:08:31 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.965 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:26:55.965 [2024-12-09 23:08:31.193900] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:55.965 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.965 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:26:55.965 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:55.965 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:55.965 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.965 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.965 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.965 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:26:55.965 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:26:55.965 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:26:55.965 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:26:55.965 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:26:55.965 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:26:55.965 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:26:55.965 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:55.965 23:08:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:55.965 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:55.965 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:26:55.965 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:55.965 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:55.965 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:56.225 [2024-12-09 23:08:31.429788] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:26:56.225 /dev/nbd0 00:26:56.225 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:56.225 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:56.225 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:56.225 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:26:56.225 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:56.225 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:56.225 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:56.225 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:26:56.225 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:56.225 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:56.225 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:56.225 1+0 records in 00:26:56.225 1+0 records out 00:26:56.225 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288784 s, 14.2 MB/s 00:26:56.225 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:56.225 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:26:56.225 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:56.225 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:56.225 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:26:56.225 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:56.225 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:56.225 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:26:56.225 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:26:56.225 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:26:56.225 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:26:56.796 496+0 records in 00:26:56.796 496+0 records out 00:26:56.796 97517568 bytes (98 MB, 93 MiB) copied, 0.482611 s, 202 MB/s 00:26:56.796 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:26:56.796 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:26:56.796 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:56.796 23:08:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:56.796 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:26:56.796 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:56.796 23:08:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:26:57.085 23:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:57.085 23:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:57.085 23:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:57.085 23:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:57.085 23:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:57.085 23:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:57.085 23:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:26:57.085 23:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:26:57.085 23:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:26:57.085 23:08:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.085 23:08:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.085 [2024-12-09 23:08:32.192891] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:57.085 [2024-12-09 23:08:32.202746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:57.085 23:08:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.085 23:08:32 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:57.085 23:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:57.085 23:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:57.085 23:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:57.085 23:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:57.085 23:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:57.085 23:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:57.085 23:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:57.085 23:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:57.085 23:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:57.085 23:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:57.085 23:08:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.085 23:08:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.085 23:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:57.085 23:08:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.085 23:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:57.085 "name": "raid_bdev1", 00:26:57.085 "uuid": "32b90177-099d-4203-b4d6-43dc2785d18b", 00:26:57.085 "strip_size_kb": 64, 00:26:57.085 "state": "online", 00:26:57.085 "raid_level": "raid5f", 00:26:57.085 "superblock": true, 00:26:57.085 "num_base_bdevs": 4, 
00:26:57.085 "num_base_bdevs_discovered": 3, 00:26:57.085 "num_base_bdevs_operational": 3, 00:26:57.085 "base_bdevs_list": [ 00:26:57.085 { 00:26:57.085 "name": null, 00:26:57.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:57.085 "is_configured": false, 00:26:57.085 "data_offset": 0, 00:26:57.085 "data_size": 63488 00:26:57.085 }, 00:26:57.085 { 00:26:57.085 "name": "BaseBdev2", 00:26:57.085 "uuid": "f82821af-68c1-5aea-8c5e-7ee10a7e8be4", 00:26:57.085 "is_configured": true, 00:26:57.085 "data_offset": 2048, 00:26:57.085 "data_size": 63488 00:26:57.085 }, 00:26:57.085 { 00:26:57.085 "name": "BaseBdev3", 00:26:57.085 "uuid": "3eee5fa6-8c08-5a42-b6a9-13bd3ddc0c44", 00:26:57.085 "is_configured": true, 00:26:57.085 "data_offset": 2048, 00:26:57.085 "data_size": 63488 00:26:57.085 }, 00:26:57.085 { 00:26:57.085 "name": "BaseBdev4", 00:26:57.085 "uuid": "55306c4d-f053-5701-a802-7ffc63782d72", 00:26:57.085 "is_configured": true, 00:26:57.085 "data_offset": 2048, 00:26:57.085 "data_size": 63488 00:26:57.085 } 00:26:57.085 ] 00:26:57.085 }' 00:26:57.085 23:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:57.085 23:08:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.379 23:08:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:26:57.379 23:08:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.379 23:08:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.379 [2024-12-09 23:08:32.538829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:57.379 [2024-12-09 23:08:32.549002] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:26:57.379 23:08:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.379 23:08:32 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:26:57.379 [2024-12-09 23:08:32.555824] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:58.318 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:58.318 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:58.318 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:58.318 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:58.318 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:58.318 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:58.318 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:58.318 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.318 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:58.318 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.318 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:58.318 "name": "raid_bdev1", 00:26:58.318 "uuid": "32b90177-099d-4203-b4d6-43dc2785d18b", 00:26:58.318 "strip_size_kb": 64, 00:26:58.318 "state": "online", 00:26:58.318 "raid_level": "raid5f", 00:26:58.318 "superblock": true, 00:26:58.318 "num_base_bdevs": 4, 00:26:58.318 "num_base_bdevs_discovered": 4, 00:26:58.318 "num_base_bdevs_operational": 4, 00:26:58.318 "process": { 00:26:58.318 "type": "rebuild", 00:26:58.318 "target": "spare", 00:26:58.318 "progress": { 00:26:58.318 "blocks": 17280, 00:26:58.318 "percent": 9 00:26:58.318 } 
00:26:58.318 }, 00:26:58.318 "base_bdevs_list": [ 00:26:58.318 { 00:26:58.318 "name": "spare", 00:26:58.318 "uuid": "9c2608e5-88a1-57b2-a8eb-2086911f3960", 00:26:58.318 "is_configured": true, 00:26:58.318 "data_offset": 2048, 00:26:58.318 "data_size": 63488 00:26:58.318 }, 00:26:58.318 { 00:26:58.318 "name": "BaseBdev2", 00:26:58.318 "uuid": "f82821af-68c1-5aea-8c5e-7ee10a7e8be4", 00:26:58.318 "is_configured": true, 00:26:58.318 "data_offset": 2048, 00:26:58.318 "data_size": 63488 00:26:58.318 }, 00:26:58.318 { 00:26:58.318 "name": "BaseBdev3", 00:26:58.318 "uuid": "3eee5fa6-8c08-5a42-b6a9-13bd3ddc0c44", 00:26:58.318 "is_configured": true, 00:26:58.318 "data_offset": 2048, 00:26:58.318 "data_size": 63488 00:26:58.318 }, 00:26:58.318 { 00:26:58.318 "name": "BaseBdev4", 00:26:58.318 "uuid": "55306c4d-f053-5701-a802-7ffc63782d72", 00:26:58.318 "is_configured": true, 00:26:58.318 "data_offset": 2048, 00:26:58.318 "data_size": 63488 00:26:58.318 } 00:26:58.318 ] 00:26:58.318 }' 00:26:58.318 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:58.319 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:58.319 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:58.319 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:58.319 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:26:58.319 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.319 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:58.319 [2024-12-09 23:08:33.645121] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:58.319 [2024-12-09 23:08:33.665292] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished 
rebuild on raid bdev raid_bdev1: No such device 00:26:58.319 [2024-12-09 23:08:33.665362] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:58.319 [2024-12-09 23:08:33.665379] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:58.319 [2024-12-09 23:08:33.665389] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:58.580 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.580 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:58.580 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:58.580 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:58.580 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:58.580 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:58.580 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:58.580 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:58.580 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:58.580 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:58.580 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:58.580 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:58.580 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:58.580 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:58.581 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:58.581 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.581 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:58.581 "name": "raid_bdev1", 00:26:58.581 "uuid": "32b90177-099d-4203-b4d6-43dc2785d18b", 00:26:58.581 "strip_size_kb": 64, 00:26:58.581 "state": "online", 00:26:58.581 "raid_level": "raid5f", 00:26:58.581 "superblock": true, 00:26:58.581 "num_base_bdevs": 4, 00:26:58.581 "num_base_bdevs_discovered": 3, 00:26:58.581 "num_base_bdevs_operational": 3, 00:26:58.581 "base_bdevs_list": [ 00:26:58.581 { 00:26:58.581 "name": null, 00:26:58.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:58.581 "is_configured": false, 00:26:58.581 "data_offset": 0, 00:26:58.581 "data_size": 63488 00:26:58.581 }, 00:26:58.581 { 00:26:58.581 "name": "BaseBdev2", 00:26:58.581 "uuid": "f82821af-68c1-5aea-8c5e-7ee10a7e8be4", 00:26:58.581 "is_configured": true, 00:26:58.581 "data_offset": 2048, 00:26:58.581 "data_size": 63488 00:26:58.581 }, 00:26:58.581 { 00:26:58.581 "name": "BaseBdev3", 00:26:58.581 "uuid": "3eee5fa6-8c08-5a42-b6a9-13bd3ddc0c44", 00:26:58.581 "is_configured": true, 00:26:58.581 "data_offset": 2048, 00:26:58.581 "data_size": 63488 00:26:58.581 }, 00:26:58.581 { 00:26:58.581 "name": "BaseBdev4", 00:26:58.581 "uuid": "55306c4d-f053-5701-a802-7ffc63782d72", 00:26:58.581 "is_configured": true, 00:26:58.581 "data_offset": 2048, 00:26:58.581 "data_size": 63488 00:26:58.581 } 00:26:58.581 ] 00:26:58.581 }' 00:26:58.581 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:58.581 23:08:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:58.842 23:08:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:58.842 23:08:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:58.842 23:08:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:58.842 23:08:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:58.842 23:08:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:58.842 23:08:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:58.842 23:08:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:58.842 23:08:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.842 23:08:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:58.842 23:08:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.842 23:08:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:58.842 "name": "raid_bdev1", 00:26:58.842 "uuid": "32b90177-099d-4203-b4d6-43dc2785d18b", 00:26:58.842 "strip_size_kb": 64, 00:26:58.842 "state": "online", 00:26:58.842 "raid_level": "raid5f", 00:26:58.842 "superblock": true, 00:26:58.842 "num_base_bdevs": 4, 00:26:58.842 "num_base_bdevs_discovered": 3, 00:26:58.842 "num_base_bdevs_operational": 3, 00:26:58.842 "base_bdevs_list": [ 00:26:58.842 { 00:26:58.842 "name": null, 00:26:58.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:58.842 "is_configured": false, 00:26:58.842 "data_offset": 0, 00:26:58.842 "data_size": 63488 00:26:58.842 }, 00:26:58.842 { 00:26:58.842 "name": "BaseBdev2", 00:26:58.842 "uuid": "f82821af-68c1-5aea-8c5e-7ee10a7e8be4", 00:26:58.842 "is_configured": true, 00:26:58.842 "data_offset": 2048, 00:26:58.842 "data_size": 63488 00:26:58.842 }, 00:26:58.842 { 00:26:58.842 "name": "BaseBdev3", 00:26:58.842 "uuid": 
"3eee5fa6-8c08-5a42-b6a9-13bd3ddc0c44", 00:26:58.842 "is_configured": true, 00:26:58.842 "data_offset": 2048, 00:26:58.842 "data_size": 63488 00:26:58.842 }, 00:26:58.842 { 00:26:58.842 "name": "BaseBdev4", 00:26:58.842 "uuid": "55306c4d-f053-5701-a802-7ffc63782d72", 00:26:58.842 "is_configured": true, 00:26:58.842 "data_offset": 2048, 00:26:58.842 "data_size": 63488 00:26:58.842 } 00:26:58.842 ] 00:26:58.842 }' 00:26:58.842 23:08:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:58.842 23:08:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:58.842 23:08:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:58.842 23:08:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:58.842 23:08:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:26:58.842 23:08:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.842 23:08:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:58.842 [2024-12-09 23:08:34.165598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:58.842 [2024-12-09 23:08:34.175122] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:26:58.843 23:08:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.843 23:08:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:26:58.843 [2024-12-09 23:08:34.181785] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:59.868 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:59.868 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:59.868 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:59.868 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:59.868 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:59.868 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:59.868 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.868 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:59.868 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:59.868 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.868 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:59.868 "name": "raid_bdev1", 00:26:59.868 "uuid": "32b90177-099d-4203-b4d6-43dc2785d18b", 00:26:59.868 "strip_size_kb": 64, 00:26:59.868 "state": "online", 00:26:59.868 "raid_level": "raid5f", 00:26:59.868 "superblock": true, 00:26:59.868 "num_base_bdevs": 4, 00:26:59.868 "num_base_bdevs_discovered": 4, 00:26:59.868 "num_base_bdevs_operational": 4, 00:26:59.868 "process": { 00:26:59.868 "type": "rebuild", 00:26:59.868 "target": "spare", 00:26:59.868 "progress": { 00:26:59.868 "blocks": 17280, 00:26:59.868 "percent": 9 00:26:59.868 } 00:26:59.868 }, 00:26:59.868 "base_bdevs_list": [ 00:26:59.868 { 00:26:59.868 "name": "spare", 00:26:59.868 "uuid": "9c2608e5-88a1-57b2-a8eb-2086911f3960", 00:26:59.868 "is_configured": true, 00:26:59.868 "data_offset": 2048, 00:26:59.868 "data_size": 63488 00:26:59.868 }, 00:26:59.868 { 00:26:59.868 "name": "BaseBdev2", 00:26:59.868 "uuid": "f82821af-68c1-5aea-8c5e-7ee10a7e8be4", 00:26:59.868 
"is_configured": true, 00:26:59.868 "data_offset": 2048, 00:26:59.868 "data_size": 63488 00:26:59.868 }, 00:26:59.868 { 00:26:59.868 "name": "BaseBdev3", 00:26:59.868 "uuid": "3eee5fa6-8c08-5a42-b6a9-13bd3ddc0c44", 00:26:59.868 "is_configured": true, 00:26:59.868 "data_offset": 2048, 00:26:59.868 "data_size": 63488 00:26:59.868 }, 00:26:59.868 { 00:26:59.868 "name": "BaseBdev4", 00:26:59.868 "uuid": "55306c4d-f053-5701-a802-7ffc63782d72", 00:26:59.868 "is_configured": true, 00:26:59.868 "data_offset": 2048, 00:26:59.868 "data_size": 63488 00:26:59.868 } 00:26:59.868 ] 00:26:59.868 }' 00:26:59.868 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:00.129 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:00.129 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:00.129 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:00.129 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:27:00.129 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:27:00.129 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:27:00.129 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:27:00.129 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:27:00.129 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=523 00:27:00.129 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:00.129 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:00.129 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:00.129 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:00.129 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:00.129 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:00.129 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:00.129 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:00.129 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.129 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:00.129 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.129 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:00.129 "name": "raid_bdev1", 00:27:00.129 "uuid": "32b90177-099d-4203-b4d6-43dc2785d18b", 00:27:00.129 "strip_size_kb": 64, 00:27:00.129 "state": "online", 00:27:00.129 "raid_level": "raid5f", 00:27:00.129 "superblock": true, 00:27:00.129 "num_base_bdevs": 4, 00:27:00.129 "num_base_bdevs_discovered": 4, 00:27:00.129 "num_base_bdevs_operational": 4, 00:27:00.129 "process": { 00:27:00.129 "type": "rebuild", 00:27:00.129 "target": "spare", 00:27:00.129 "progress": { 00:27:00.129 "blocks": 19200, 00:27:00.129 "percent": 10 00:27:00.129 } 00:27:00.129 }, 00:27:00.129 "base_bdevs_list": [ 00:27:00.129 { 00:27:00.129 "name": "spare", 00:27:00.129 "uuid": "9c2608e5-88a1-57b2-a8eb-2086911f3960", 00:27:00.129 "is_configured": true, 00:27:00.129 "data_offset": 2048, 00:27:00.129 "data_size": 63488 00:27:00.129 }, 00:27:00.129 { 00:27:00.129 "name": "BaseBdev2", 00:27:00.129 "uuid": "f82821af-68c1-5aea-8c5e-7ee10a7e8be4", 00:27:00.129 
"is_configured": true, 00:27:00.129 "data_offset": 2048, 00:27:00.129 "data_size": 63488 00:27:00.129 }, 00:27:00.129 { 00:27:00.129 "name": "BaseBdev3", 00:27:00.129 "uuid": "3eee5fa6-8c08-5a42-b6a9-13bd3ddc0c44", 00:27:00.130 "is_configured": true, 00:27:00.130 "data_offset": 2048, 00:27:00.130 "data_size": 63488 00:27:00.130 }, 00:27:00.130 { 00:27:00.130 "name": "BaseBdev4", 00:27:00.130 "uuid": "55306c4d-f053-5701-a802-7ffc63782d72", 00:27:00.130 "is_configured": true, 00:27:00.130 "data_offset": 2048, 00:27:00.130 "data_size": 63488 00:27:00.130 } 00:27:00.130 ] 00:27:00.130 }' 00:27:00.130 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:00.130 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:00.130 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:00.130 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:00.130 23:08:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:01.073 23:08:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:01.073 23:08:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:01.073 23:08:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:01.073 23:08:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:01.073 23:08:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:01.073 23:08:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:01.073 23:08:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:01.073 23:08:36 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.073 23:08:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:01.073 23:08:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:01.073 23:08:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.073 23:08:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:01.073 "name": "raid_bdev1", 00:27:01.073 "uuid": "32b90177-099d-4203-b4d6-43dc2785d18b", 00:27:01.073 "strip_size_kb": 64, 00:27:01.073 "state": "online", 00:27:01.073 "raid_level": "raid5f", 00:27:01.073 "superblock": true, 00:27:01.073 "num_base_bdevs": 4, 00:27:01.073 "num_base_bdevs_discovered": 4, 00:27:01.073 "num_base_bdevs_operational": 4, 00:27:01.073 "process": { 00:27:01.073 "type": "rebuild", 00:27:01.073 "target": "spare", 00:27:01.073 "progress": { 00:27:01.073 "blocks": 40320, 00:27:01.073 "percent": 21 00:27:01.073 } 00:27:01.073 }, 00:27:01.073 "base_bdevs_list": [ 00:27:01.073 { 00:27:01.073 "name": "spare", 00:27:01.073 "uuid": "9c2608e5-88a1-57b2-a8eb-2086911f3960", 00:27:01.073 "is_configured": true, 00:27:01.073 "data_offset": 2048, 00:27:01.073 "data_size": 63488 00:27:01.073 }, 00:27:01.073 { 00:27:01.073 "name": "BaseBdev2", 00:27:01.073 "uuid": "f82821af-68c1-5aea-8c5e-7ee10a7e8be4", 00:27:01.073 "is_configured": true, 00:27:01.073 "data_offset": 2048, 00:27:01.073 "data_size": 63488 00:27:01.073 }, 00:27:01.073 { 00:27:01.073 "name": "BaseBdev3", 00:27:01.073 "uuid": "3eee5fa6-8c08-5a42-b6a9-13bd3ddc0c44", 00:27:01.073 "is_configured": true, 00:27:01.073 "data_offset": 2048, 00:27:01.073 "data_size": 63488 00:27:01.073 }, 00:27:01.073 { 00:27:01.073 "name": "BaseBdev4", 00:27:01.073 "uuid": "55306c4d-f053-5701-a802-7ffc63782d72", 00:27:01.073 "is_configured": true, 00:27:01.073 "data_offset": 2048, 00:27:01.073 
"data_size": 63488 00:27:01.073 } 00:27:01.073 ] 00:27:01.073 }' 00:27:01.073 23:08:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:01.334 23:08:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:01.334 23:08:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:01.334 23:08:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:01.334 23:08:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:02.286 23:08:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:02.286 23:08:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:02.286 23:08:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:02.286 23:08:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:02.286 23:08:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:02.286 23:08:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:02.286 23:08:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:02.286 23:08:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:02.286 23:08:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.286 23:08:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:02.286 23:08:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.286 23:08:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:02.286 "name": 
"raid_bdev1", 00:27:02.286 "uuid": "32b90177-099d-4203-b4d6-43dc2785d18b", 00:27:02.286 "strip_size_kb": 64, 00:27:02.286 "state": "online", 00:27:02.286 "raid_level": "raid5f", 00:27:02.286 "superblock": true, 00:27:02.286 "num_base_bdevs": 4, 00:27:02.286 "num_base_bdevs_discovered": 4, 00:27:02.286 "num_base_bdevs_operational": 4, 00:27:02.286 "process": { 00:27:02.286 "type": "rebuild", 00:27:02.286 "target": "spare", 00:27:02.286 "progress": { 00:27:02.286 "blocks": 61440, 00:27:02.286 "percent": 32 00:27:02.286 } 00:27:02.286 }, 00:27:02.286 "base_bdevs_list": [ 00:27:02.286 { 00:27:02.286 "name": "spare", 00:27:02.287 "uuid": "9c2608e5-88a1-57b2-a8eb-2086911f3960", 00:27:02.287 "is_configured": true, 00:27:02.287 "data_offset": 2048, 00:27:02.287 "data_size": 63488 00:27:02.287 }, 00:27:02.287 { 00:27:02.287 "name": "BaseBdev2", 00:27:02.287 "uuid": "f82821af-68c1-5aea-8c5e-7ee10a7e8be4", 00:27:02.287 "is_configured": true, 00:27:02.287 "data_offset": 2048, 00:27:02.287 "data_size": 63488 00:27:02.287 }, 00:27:02.287 { 00:27:02.287 "name": "BaseBdev3", 00:27:02.287 "uuid": "3eee5fa6-8c08-5a42-b6a9-13bd3ddc0c44", 00:27:02.287 "is_configured": true, 00:27:02.287 "data_offset": 2048, 00:27:02.287 "data_size": 63488 00:27:02.287 }, 00:27:02.287 { 00:27:02.287 "name": "BaseBdev4", 00:27:02.287 "uuid": "55306c4d-f053-5701-a802-7ffc63782d72", 00:27:02.287 "is_configured": true, 00:27:02.287 "data_offset": 2048, 00:27:02.287 "data_size": 63488 00:27:02.287 } 00:27:02.287 ] 00:27:02.287 }' 00:27:02.287 23:08:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:02.287 23:08:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:02.287 23:08:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:02.287 23:08:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:02.287 23:08:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:03.230 23:08:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:03.230 23:08:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:03.230 23:08:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:03.230 23:08:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:03.230 23:08:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:03.230 23:08:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:03.230 23:08:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:03.230 23:08:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:03.230 23:08:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.230 23:08:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:03.490 23:08:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.490 23:08:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:03.490 "name": "raid_bdev1", 00:27:03.490 "uuid": "32b90177-099d-4203-b4d6-43dc2785d18b", 00:27:03.490 "strip_size_kb": 64, 00:27:03.490 "state": "online", 00:27:03.490 "raid_level": "raid5f", 00:27:03.490 "superblock": true, 00:27:03.490 "num_base_bdevs": 4, 00:27:03.490 "num_base_bdevs_discovered": 4, 00:27:03.490 "num_base_bdevs_operational": 4, 00:27:03.490 "process": { 00:27:03.490 "type": "rebuild", 00:27:03.490 "target": "spare", 00:27:03.490 "progress": { 00:27:03.491 "blocks": 82560, 00:27:03.491 "percent": 43 00:27:03.491 } 00:27:03.491 }, 00:27:03.491 
"base_bdevs_list": [ 00:27:03.491 { 00:27:03.491 "name": "spare", 00:27:03.491 "uuid": "9c2608e5-88a1-57b2-a8eb-2086911f3960", 00:27:03.491 "is_configured": true, 00:27:03.491 "data_offset": 2048, 00:27:03.491 "data_size": 63488 00:27:03.491 }, 00:27:03.491 { 00:27:03.491 "name": "BaseBdev2", 00:27:03.491 "uuid": "f82821af-68c1-5aea-8c5e-7ee10a7e8be4", 00:27:03.491 "is_configured": true, 00:27:03.491 "data_offset": 2048, 00:27:03.491 "data_size": 63488 00:27:03.491 }, 00:27:03.491 { 00:27:03.491 "name": "BaseBdev3", 00:27:03.491 "uuid": "3eee5fa6-8c08-5a42-b6a9-13bd3ddc0c44", 00:27:03.491 "is_configured": true, 00:27:03.491 "data_offset": 2048, 00:27:03.491 "data_size": 63488 00:27:03.491 }, 00:27:03.491 { 00:27:03.491 "name": "BaseBdev4", 00:27:03.491 "uuid": "55306c4d-f053-5701-a802-7ffc63782d72", 00:27:03.491 "is_configured": true, 00:27:03.491 "data_offset": 2048, 00:27:03.491 "data_size": 63488 00:27:03.491 } 00:27:03.491 ] 00:27:03.491 }' 00:27:03.491 23:08:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:03.491 23:08:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:03.491 23:08:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:03.491 23:08:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:03.491 23:08:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:04.448 23:08:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:04.448 23:08:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:04.448 23:08:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:04.448 23:08:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:27:04.448 23:08:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:04.448 23:08:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:04.448 23:08:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:04.448 23:08:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.448 23:08:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:04.448 23:08:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:04.448 23:08:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.448 23:08:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:04.448 "name": "raid_bdev1", 00:27:04.448 "uuid": "32b90177-099d-4203-b4d6-43dc2785d18b", 00:27:04.448 "strip_size_kb": 64, 00:27:04.448 "state": "online", 00:27:04.448 "raid_level": "raid5f", 00:27:04.448 "superblock": true, 00:27:04.448 "num_base_bdevs": 4, 00:27:04.448 "num_base_bdevs_discovered": 4, 00:27:04.448 "num_base_bdevs_operational": 4, 00:27:04.448 "process": { 00:27:04.448 "type": "rebuild", 00:27:04.448 "target": "spare", 00:27:04.448 "progress": { 00:27:04.448 "blocks": 103680, 00:27:04.448 "percent": 54 00:27:04.448 } 00:27:04.448 }, 00:27:04.448 "base_bdevs_list": [ 00:27:04.448 { 00:27:04.448 "name": "spare", 00:27:04.448 "uuid": "9c2608e5-88a1-57b2-a8eb-2086911f3960", 00:27:04.448 "is_configured": true, 00:27:04.448 "data_offset": 2048, 00:27:04.448 "data_size": 63488 00:27:04.448 }, 00:27:04.448 { 00:27:04.448 "name": "BaseBdev2", 00:27:04.448 "uuid": "f82821af-68c1-5aea-8c5e-7ee10a7e8be4", 00:27:04.448 "is_configured": true, 00:27:04.448 "data_offset": 2048, 00:27:04.448 "data_size": 63488 00:27:04.448 }, 00:27:04.448 { 00:27:04.448 "name": "BaseBdev3", 00:27:04.448 "uuid": 
"3eee5fa6-8c08-5a42-b6a9-13bd3ddc0c44", 00:27:04.448 "is_configured": true, 00:27:04.448 "data_offset": 2048, 00:27:04.448 "data_size": 63488 00:27:04.448 }, 00:27:04.448 { 00:27:04.448 "name": "BaseBdev4", 00:27:04.448 "uuid": "55306c4d-f053-5701-a802-7ffc63782d72", 00:27:04.448 "is_configured": true, 00:27:04.448 "data_offset": 2048, 00:27:04.448 "data_size": 63488 00:27:04.448 } 00:27:04.448 ] 00:27:04.448 }' 00:27:04.448 23:08:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:04.448 23:08:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:04.448 23:08:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:04.448 23:08:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:04.448 23:08:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:05.832 23:08:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:05.832 23:08:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:05.832 23:08:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:05.832 23:08:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:05.832 23:08:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:05.832 23:08:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:05.832 23:08:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:05.832 23:08:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:05.832 23:08:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:05.832 23:08:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.832 23:08:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.832 23:08:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:05.832 "name": "raid_bdev1", 00:27:05.832 "uuid": "32b90177-099d-4203-b4d6-43dc2785d18b", 00:27:05.832 "strip_size_kb": 64, 00:27:05.832 "state": "online", 00:27:05.832 "raid_level": "raid5f", 00:27:05.832 "superblock": true, 00:27:05.832 "num_base_bdevs": 4, 00:27:05.832 "num_base_bdevs_discovered": 4, 00:27:05.832 "num_base_bdevs_operational": 4, 00:27:05.832 "process": { 00:27:05.832 "type": "rebuild", 00:27:05.832 "target": "spare", 00:27:05.832 "progress": { 00:27:05.832 "blocks": 124800, 00:27:05.832 "percent": 65 00:27:05.832 } 00:27:05.832 }, 00:27:05.832 "base_bdevs_list": [ 00:27:05.832 { 00:27:05.832 "name": "spare", 00:27:05.832 "uuid": "9c2608e5-88a1-57b2-a8eb-2086911f3960", 00:27:05.832 "is_configured": true, 00:27:05.832 "data_offset": 2048, 00:27:05.832 "data_size": 63488 00:27:05.832 }, 00:27:05.832 { 00:27:05.832 "name": "BaseBdev2", 00:27:05.832 "uuid": "f82821af-68c1-5aea-8c5e-7ee10a7e8be4", 00:27:05.832 "is_configured": true, 00:27:05.832 "data_offset": 2048, 00:27:05.832 "data_size": 63488 00:27:05.832 }, 00:27:05.832 { 00:27:05.832 "name": "BaseBdev3", 00:27:05.832 "uuid": "3eee5fa6-8c08-5a42-b6a9-13bd3ddc0c44", 00:27:05.832 "is_configured": true, 00:27:05.832 "data_offset": 2048, 00:27:05.832 "data_size": 63488 00:27:05.832 }, 00:27:05.832 { 00:27:05.832 "name": "BaseBdev4", 00:27:05.832 "uuid": "55306c4d-f053-5701-a802-7ffc63782d72", 00:27:05.832 "is_configured": true, 00:27:05.832 "data_offset": 2048, 00:27:05.832 "data_size": 63488 00:27:05.832 } 00:27:05.832 ] 00:27:05.832 }' 00:27:05.832 23:08:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:05.832 23:08:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:05.832 23:08:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:05.832 23:08:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:05.832 23:08:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:06.774 23:08:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:06.774 23:08:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:06.774 23:08:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:06.774 23:08:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:06.774 23:08:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:06.774 23:08:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:06.775 23:08:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:06.775 23:08:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:06.775 23:08:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.775 23:08:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:06.775 23:08:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.775 23:08:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:06.775 "name": "raid_bdev1", 00:27:06.775 "uuid": "32b90177-099d-4203-b4d6-43dc2785d18b", 00:27:06.775 "strip_size_kb": 64, 00:27:06.775 "state": "online", 00:27:06.775 "raid_level": "raid5f", 00:27:06.775 "superblock": true, 
00:27:06.775 "num_base_bdevs": 4, 00:27:06.775 "num_base_bdevs_discovered": 4, 00:27:06.775 "num_base_bdevs_operational": 4, 00:27:06.775 "process": { 00:27:06.775 "type": "rebuild", 00:27:06.775 "target": "spare", 00:27:06.775 "progress": { 00:27:06.775 "blocks": 145920, 00:27:06.775 "percent": 76 00:27:06.775 } 00:27:06.775 }, 00:27:06.775 "base_bdevs_list": [ 00:27:06.775 { 00:27:06.775 "name": "spare", 00:27:06.775 "uuid": "9c2608e5-88a1-57b2-a8eb-2086911f3960", 00:27:06.775 "is_configured": true, 00:27:06.775 "data_offset": 2048, 00:27:06.775 "data_size": 63488 00:27:06.775 }, 00:27:06.775 { 00:27:06.775 "name": "BaseBdev2", 00:27:06.775 "uuid": "f82821af-68c1-5aea-8c5e-7ee10a7e8be4", 00:27:06.775 "is_configured": true, 00:27:06.775 "data_offset": 2048, 00:27:06.775 "data_size": 63488 00:27:06.775 }, 00:27:06.775 { 00:27:06.775 "name": "BaseBdev3", 00:27:06.775 "uuid": "3eee5fa6-8c08-5a42-b6a9-13bd3ddc0c44", 00:27:06.775 "is_configured": true, 00:27:06.775 "data_offset": 2048, 00:27:06.775 "data_size": 63488 00:27:06.775 }, 00:27:06.775 { 00:27:06.775 "name": "BaseBdev4", 00:27:06.775 "uuid": "55306c4d-f053-5701-a802-7ffc63782d72", 00:27:06.775 "is_configured": true, 00:27:06.775 "data_offset": 2048, 00:27:06.775 "data_size": 63488 00:27:06.775 } 00:27:06.775 ] 00:27:06.775 }' 00:27:06.775 23:08:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:06.775 23:08:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:06.775 23:08:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:06.775 23:08:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:06.775 23:08:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:07.716 23:08:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:07.716 23:08:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:07.716 23:08:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:07.716 23:08:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:07.716 23:08:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:07.716 23:08:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:07.716 23:08:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:07.716 23:08:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:07.716 23:08:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.716 23:08:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:07.716 23:08:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.716 23:08:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:07.716 "name": "raid_bdev1", 00:27:07.716 "uuid": "32b90177-099d-4203-b4d6-43dc2785d18b", 00:27:07.716 "strip_size_kb": 64, 00:27:07.716 "state": "online", 00:27:07.716 "raid_level": "raid5f", 00:27:07.716 "superblock": true, 00:27:07.716 "num_base_bdevs": 4, 00:27:07.716 "num_base_bdevs_discovered": 4, 00:27:07.716 "num_base_bdevs_operational": 4, 00:27:07.716 "process": { 00:27:07.716 "type": "rebuild", 00:27:07.716 "target": "spare", 00:27:07.716 "progress": { 00:27:07.716 "blocks": 167040, 00:27:07.716 "percent": 87 00:27:07.716 } 00:27:07.716 }, 00:27:07.716 "base_bdevs_list": [ 00:27:07.716 { 00:27:07.716 "name": "spare", 00:27:07.716 "uuid": "9c2608e5-88a1-57b2-a8eb-2086911f3960", 00:27:07.716 "is_configured": true, 00:27:07.716 "data_offset": 2048, 00:27:07.716 
"data_size": 63488 00:27:07.716 }, 00:27:07.716 { 00:27:07.716 "name": "BaseBdev2", 00:27:07.716 "uuid": "f82821af-68c1-5aea-8c5e-7ee10a7e8be4", 00:27:07.716 "is_configured": true, 00:27:07.716 "data_offset": 2048, 00:27:07.716 "data_size": 63488 00:27:07.716 }, 00:27:07.716 { 00:27:07.716 "name": "BaseBdev3", 00:27:07.716 "uuid": "3eee5fa6-8c08-5a42-b6a9-13bd3ddc0c44", 00:27:07.716 "is_configured": true, 00:27:07.716 "data_offset": 2048, 00:27:07.716 "data_size": 63488 00:27:07.716 }, 00:27:07.716 { 00:27:07.716 "name": "BaseBdev4", 00:27:07.716 "uuid": "55306c4d-f053-5701-a802-7ffc63782d72", 00:27:07.716 "is_configured": true, 00:27:07.716 "data_offset": 2048, 00:27:07.716 "data_size": 63488 00:27:07.716 } 00:27:07.716 ] 00:27:07.716 }' 00:27:07.716 23:08:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:07.716 23:08:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:07.716 23:08:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:07.976 23:08:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:07.976 23:08:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:08.917 23:08:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:08.917 23:08:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:08.917 23:08:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:08.917 23:08:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:08.917 23:08:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:08.917 23:08:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:27:08.917 23:08:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:08.917 23:08:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.917 23:08:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:08.917 23:08:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:08.917 23:08:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.917 23:08:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:08.917 "name": "raid_bdev1", 00:27:08.917 "uuid": "32b90177-099d-4203-b4d6-43dc2785d18b", 00:27:08.917 "strip_size_kb": 64, 00:27:08.917 "state": "online", 00:27:08.917 "raid_level": "raid5f", 00:27:08.917 "superblock": true, 00:27:08.917 "num_base_bdevs": 4, 00:27:08.917 "num_base_bdevs_discovered": 4, 00:27:08.917 "num_base_bdevs_operational": 4, 00:27:08.917 "process": { 00:27:08.917 "type": "rebuild", 00:27:08.917 "target": "spare", 00:27:08.917 "progress": { 00:27:08.917 "blocks": 188160, 00:27:08.917 "percent": 98 00:27:08.917 } 00:27:08.917 }, 00:27:08.917 "base_bdevs_list": [ 00:27:08.917 { 00:27:08.917 "name": "spare", 00:27:08.917 "uuid": "9c2608e5-88a1-57b2-a8eb-2086911f3960", 00:27:08.917 "is_configured": true, 00:27:08.917 "data_offset": 2048, 00:27:08.917 "data_size": 63488 00:27:08.917 }, 00:27:08.917 { 00:27:08.917 "name": "BaseBdev2", 00:27:08.917 "uuid": "f82821af-68c1-5aea-8c5e-7ee10a7e8be4", 00:27:08.917 "is_configured": true, 00:27:08.917 "data_offset": 2048, 00:27:08.917 "data_size": 63488 00:27:08.917 }, 00:27:08.917 { 00:27:08.917 "name": "BaseBdev3", 00:27:08.917 "uuid": "3eee5fa6-8c08-5a42-b6a9-13bd3ddc0c44", 00:27:08.917 "is_configured": true, 00:27:08.917 "data_offset": 2048, 00:27:08.917 "data_size": 63488 00:27:08.917 }, 00:27:08.917 { 00:27:08.917 "name": "BaseBdev4", 
00:27:08.917 "uuid": "55306c4d-f053-5701-a802-7ffc63782d72", 00:27:08.917 "is_configured": true, 00:27:08.917 "data_offset": 2048, 00:27:08.917 "data_size": 63488 00:27:08.917 } 00:27:08.917 ] 00:27:08.917 }' 00:27:08.917 23:08:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:08.917 23:08:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:08.917 23:08:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:08.917 23:08:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:08.917 23:08:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:08.917 [2024-12-09 23:08:44.251252] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:08.917 [2024-12-09 23:08:44.251316] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:08.917 [2024-12-09 23:08:44.251438] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:09.861 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:09.861 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:09.861 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:09.861 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:09.861 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:09.861 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:09.861 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:09.861 23:08:45 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.861 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:09.861 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:09.861 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:10.123 "name": "raid_bdev1", 00:27:10.123 "uuid": "32b90177-099d-4203-b4d6-43dc2785d18b", 00:27:10.123 "strip_size_kb": 64, 00:27:10.123 "state": "online", 00:27:10.123 "raid_level": "raid5f", 00:27:10.123 "superblock": true, 00:27:10.123 "num_base_bdevs": 4, 00:27:10.123 "num_base_bdevs_discovered": 4, 00:27:10.123 "num_base_bdevs_operational": 4, 00:27:10.123 "base_bdevs_list": [ 00:27:10.123 { 00:27:10.123 "name": "spare", 00:27:10.123 "uuid": "9c2608e5-88a1-57b2-a8eb-2086911f3960", 00:27:10.123 "is_configured": true, 00:27:10.123 "data_offset": 2048, 00:27:10.123 "data_size": 63488 00:27:10.123 }, 00:27:10.123 { 00:27:10.123 "name": "BaseBdev2", 00:27:10.123 "uuid": "f82821af-68c1-5aea-8c5e-7ee10a7e8be4", 00:27:10.123 "is_configured": true, 00:27:10.123 "data_offset": 2048, 00:27:10.123 "data_size": 63488 00:27:10.123 }, 00:27:10.123 { 00:27:10.123 "name": "BaseBdev3", 00:27:10.123 "uuid": "3eee5fa6-8c08-5a42-b6a9-13bd3ddc0c44", 00:27:10.123 "is_configured": true, 00:27:10.123 "data_offset": 2048, 00:27:10.123 "data_size": 63488 00:27:10.123 }, 00:27:10.123 { 00:27:10.123 "name": "BaseBdev4", 00:27:10.123 "uuid": "55306c4d-f053-5701-a802-7ffc63782d72", 00:27:10.123 "is_configured": true, 00:27:10.123 "data_offset": 2048, 00:27:10.123 "data_size": 63488 00:27:10.123 } 00:27:10.123 ] 00:27:10.123 }' 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:10.123 23:08:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:10.123 "name": "raid_bdev1", 00:27:10.123 "uuid": "32b90177-099d-4203-b4d6-43dc2785d18b", 00:27:10.123 "strip_size_kb": 64, 00:27:10.123 "state": "online", 00:27:10.123 "raid_level": "raid5f", 00:27:10.123 "superblock": true, 00:27:10.123 "num_base_bdevs": 4, 00:27:10.123 "num_base_bdevs_discovered": 4, 00:27:10.123 "num_base_bdevs_operational": 4, 
00:27:10.123 "base_bdevs_list": [ 00:27:10.123 { 00:27:10.123 "name": "spare", 00:27:10.123 "uuid": "9c2608e5-88a1-57b2-a8eb-2086911f3960", 00:27:10.123 "is_configured": true, 00:27:10.123 "data_offset": 2048, 00:27:10.123 "data_size": 63488 00:27:10.123 }, 00:27:10.123 { 00:27:10.123 "name": "BaseBdev2", 00:27:10.123 "uuid": "f82821af-68c1-5aea-8c5e-7ee10a7e8be4", 00:27:10.123 "is_configured": true, 00:27:10.123 "data_offset": 2048, 00:27:10.123 "data_size": 63488 00:27:10.123 }, 00:27:10.123 { 00:27:10.123 "name": "BaseBdev3", 00:27:10.123 "uuid": "3eee5fa6-8c08-5a42-b6a9-13bd3ddc0c44", 00:27:10.123 "is_configured": true, 00:27:10.123 "data_offset": 2048, 00:27:10.123 "data_size": 63488 00:27:10.123 }, 00:27:10.123 { 00:27:10.123 "name": "BaseBdev4", 00:27:10.123 "uuid": "55306c4d-f053-5701-a802-7ffc63782d72", 00:27:10.123 "is_configured": true, 00:27:10.123 "data_offset": 2048, 00:27:10.123 "data_size": 63488 00:27:10.123 } 00:27:10.123 ] 00:27:10.123 }' 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.123 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:10.123 "name": "raid_bdev1", 00:27:10.123 "uuid": "32b90177-099d-4203-b4d6-43dc2785d18b", 00:27:10.123 "strip_size_kb": 64, 00:27:10.123 "state": "online", 00:27:10.123 "raid_level": "raid5f", 00:27:10.123 "superblock": true, 00:27:10.123 "num_base_bdevs": 4, 00:27:10.123 "num_base_bdevs_discovered": 4, 00:27:10.123 "num_base_bdevs_operational": 4, 00:27:10.123 "base_bdevs_list": [ 00:27:10.123 { 00:27:10.123 "name": "spare", 00:27:10.123 "uuid": "9c2608e5-88a1-57b2-a8eb-2086911f3960", 00:27:10.123 "is_configured": true, 00:27:10.123 "data_offset": 2048, 00:27:10.123 "data_size": 63488 00:27:10.123 }, 00:27:10.123 { 00:27:10.123 "name": "BaseBdev2", 00:27:10.123 "uuid": "f82821af-68c1-5aea-8c5e-7ee10a7e8be4", 00:27:10.123 "is_configured": true, 00:27:10.123 
"data_offset": 2048, 00:27:10.123 "data_size": 63488 00:27:10.124 }, 00:27:10.124 { 00:27:10.124 "name": "BaseBdev3", 00:27:10.124 "uuid": "3eee5fa6-8c08-5a42-b6a9-13bd3ddc0c44", 00:27:10.124 "is_configured": true, 00:27:10.124 "data_offset": 2048, 00:27:10.124 "data_size": 63488 00:27:10.124 }, 00:27:10.124 { 00:27:10.124 "name": "BaseBdev4", 00:27:10.124 "uuid": "55306c4d-f053-5701-a802-7ffc63782d72", 00:27:10.124 "is_configured": true, 00:27:10.124 "data_offset": 2048, 00:27:10.124 "data_size": 63488 00:27:10.124 } 00:27:10.124 ] 00:27:10.124 }' 00:27:10.124 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:10.124 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:10.383 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:10.383 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.383 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:10.383 [2024-12-09 23:08:45.708017] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:10.383 [2024-12-09 23:08:45.708151] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:10.383 [2024-12-09 23:08:45.708228] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:10.383 [2024-12-09 23:08:45.708306] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:10.383 [2024-12-09 23:08:45.708317] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:27:10.383 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.383 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:10.383 
23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:27:10.383 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.383 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:10.383 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.644 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:27:10.644 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:27:10.644 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:27:10.644 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:10.644 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:27:10.644 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:27:10.644 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:10.644 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:10.644 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:10.644 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:27:10.644 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:10.645 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:10.645 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:10.645 /dev/nbd0 00:27:10.645 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 
-- # basename /dev/nbd0 00:27:10.645 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:10.645 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:10.645 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:27:10.645 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:10.645 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:10.645 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:27:10.645 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:27:10.645 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:10.645 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:10.645 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:10.645 1+0 records in 00:27:10.645 1+0 records out 00:27:10.645 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303171 s, 13.5 MB/s 00:27:10.645 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:10.645 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:27:10.645 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:10.645 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:10.645 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:27:10.645 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:10.645 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:10.645 23:08:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:27:10.907 /dev/nbd1 00:27:10.907 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:10.907 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:10.907 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:27:10.907 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:27:10.907 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:10.907 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:10.907 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:27:10.907 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:27:10.907 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:10.907 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:10.907 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:10.907 1+0 records in 00:27:10.907 1+0 records out 00:27:10.908 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310108 s, 13.2 MB/s 00:27:10.908 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:10.908 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:27:10.908 
23:08:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:10.908 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:10.908 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:27:10.908 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:10.908 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:10.908 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:27:11.170 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:27:11.170 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:27:11.170 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:11.170 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:11.170 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:27:11.170 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:11.170 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:27:11.430 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:11.430 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:11.430 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:11.430 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:11.430 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:11.430 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:11.430 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:27:11.430 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:27:11.430 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:11.430 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:27:11.430 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:11.430 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:11.430 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:11.430 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:11.430 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:11.430 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:11.430 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:27:11.430 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:27:11.430 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:27:11.430 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:27:11.430 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.430 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:11.430 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.430 
23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:27:11.430 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.430 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:11.430 [2024-12-09 23:08:46.778731] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:11.430 [2024-12-09 23:08:46.778873] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:11.430 [2024-12-09 23:08:46.778897] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:27:11.430 [2024-12-09 23:08:46.778905] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:11.430 [2024-12-09 23:08:46.780779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:11.430 [2024-12-09 23:08:46.780811] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:11.430 [2024-12-09 23:08:46.780889] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:11.430 [2024-12-09 23:08:46.780928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:11.430 [2024-12-09 23:08:46.781033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:11.430 [2024-12-09 23:08:46.781114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:11.430 [2024-12-09 23:08:46.781178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:11.430 spare 00:27:11.430 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.430 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:27:11.430 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.430 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:11.692 [2024-12-09 23:08:46.881257] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:27:11.692 [2024-12-09 23:08:46.881461] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:27:11.692 [2024-12-09 23:08:46.881735] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:27:11.692 [2024-12-09 23:08:46.885387] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:27:11.692 [2024-12-09 23:08:46.885404] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:27:11.692 [2024-12-09 23:08:46.885564] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:11.692 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.692 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:27:11.692 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:11.692 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:11.692 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:11.692 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:11.692 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:11.692 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:11.692 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:11.692 23:08:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:11.692 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:11.692 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:11.692 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:11.692 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.692 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:11.692 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.692 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:11.692 "name": "raid_bdev1", 00:27:11.692 "uuid": "32b90177-099d-4203-b4d6-43dc2785d18b", 00:27:11.692 "strip_size_kb": 64, 00:27:11.692 "state": "online", 00:27:11.692 "raid_level": "raid5f", 00:27:11.692 "superblock": true, 00:27:11.692 "num_base_bdevs": 4, 00:27:11.692 "num_base_bdevs_discovered": 4, 00:27:11.692 "num_base_bdevs_operational": 4, 00:27:11.692 "base_bdevs_list": [ 00:27:11.692 { 00:27:11.692 "name": "spare", 00:27:11.692 "uuid": "9c2608e5-88a1-57b2-a8eb-2086911f3960", 00:27:11.692 "is_configured": true, 00:27:11.692 "data_offset": 2048, 00:27:11.692 "data_size": 63488 00:27:11.692 }, 00:27:11.692 { 00:27:11.692 "name": "BaseBdev2", 00:27:11.692 "uuid": "f82821af-68c1-5aea-8c5e-7ee10a7e8be4", 00:27:11.692 "is_configured": true, 00:27:11.692 "data_offset": 2048, 00:27:11.692 "data_size": 63488 00:27:11.692 }, 00:27:11.692 { 00:27:11.692 "name": "BaseBdev3", 00:27:11.692 "uuid": "3eee5fa6-8c08-5a42-b6a9-13bd3ddc0c44", 00:27:11.692 "is_configured": true, 00:27:11.692 "data_offset": 2048, 00:27:11.692 "data_size": 63488 00:27:11.692 }, 00:27:11.692 { 00:27:11.692 "name": "BaseBdev4", 00:27:11.692 "uuid": 
"55306c4d-f053-5701-a802-7ffc63782d72", 00:27:11.692 "is_configured": true, 00:27:11.692 "data_offset": 2048, 00:27:11.692 "data_size": 63488 00:27:11.692 } 00:27:11.692 ] 00:27:11.692 }' 00:27:11.692 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:11.692 23:08:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:11.952 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:11.952 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:11.952 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:11.952 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:11.952 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:11.952 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:11.952 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.952 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:11.952 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:11.952 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.952 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:11.952 "name": "raid_bdev1", 00:27:11.952 "uuid": "32b90177-099d-4203-b4d6-43dc2785d18b", 00:27:11.952 "strip_size_kb": 64, 00:27:11.952 "state": "online", 00:27:11.952 "raid_level": "raid5f", 00:27:11.952 "superblock": true, 00:27:11.952 "num_base_bdevs": 4, 00:27:11.952 "num_base_bdevs_discovered": 4, 00:27:11.952 "num_base_bdevs_operational": 4, 00:27:11.952 
"base_bdevs_list": [ 00:27:11.952 { 00:27:11.952 "name": "spare", 00:27:11.952 "uuid": "9c2608e5-88a1-57b2-a8eb-2086911f3960", 00:27:11.952 "is_configured": true, 00:27:11.952 "data_offset": 2048, 00:27:11.952 "data_size": 63488 00:27:11.952 }, 00:27:11.952 { 00:27:11.952 "name": "BaseBdev2", 00:27:11.952 "uuid": "f82821af-68c1-5aea-8c5e-7ee10a7e8be4", 00:27:11.952 "is_configured": true, 00:27:11.952 "data_offset": 2048, 00:27:11.952 "data_size": 63488 00:27:11.952 }, 00:27:11.952 { 00:27:11.952 "name": "BaseBdev3", 00:27:11.952 "uuid": "3eee5fa6-8c08-5a42-b6a9-13bd3ddc0c44", 00:27:11.952 "is_configured": true, 00:27:11.952 "data_offset": 2048, 00:27:11.952 "data_size": 63488 00:27:11.952 }, 00:27:11.952 { 00:27:11.952 "name": "BaseBdev4", 00:27:11.952 "uuid": "55306c4d-f053-5701-a802-7ffc63782d72", 00:27:11.952 "is_configured": true, 00:27:11.952 "data_offset": 2048, 00:27:11.952 "data_size": 63488 00:27:11.952 } 00:27:11.952 ] 00:27:11.952 }' 00:27:11.952 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:11.952 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:11.952 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:11.952 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:11.952 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:27:11.952 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:11.952 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.952 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:11.952 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.952 23:08:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:27:11.952 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:27:11.952 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.952 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:12.213 [2024-12-09 23:08:47.317954] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:12.213 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.213 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:12.213 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:12.213 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:12.213 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:12.213 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:12.213 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:12.213 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:12.213 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:12.213 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:12.213 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:12.213 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:12.213 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:27:12.213 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.213 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:12.213 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.213 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:12.213 "name": "raid_bdev1", 00:27:12.213 "uuid": "32b90177-099d-4203-b4d6-43dc2785d18b", 00:27:12.213 "strip_size_kb": 64, 00:27:12.213 "state": "online", 00:27:12.213 "raid_level": "raid5f", 00:27:12.213 "superblock": true, 00:27:12.213 "num_base_bdevs": 4, 00:27:12.213 "num_base_bdevs_discovered": 3, 00:27:12.213 "num_base_bdevs_operational": 3, 00:27:12.213 "base_bdevs_list": [ 00:27:12.213 { 00:27:12.213 "name": null, 00:27:12.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:12.213 "is_configured": false, 00:27:12.213 "data_offset": 0, 00:27:12.213 "data_size": 63488 00:27:12.213 }, 00:27:12.213 { 00:27:12.213 "name": "BaseBdev2", 00:27:12.213 "uuid": "f82821af-68c1-5aea-8c5e-7ee10a7e8be4", 00:27:12.213 "is_configured": true, 00:27:12.213 "data_offset": 2048, 00:27:12.213 "data_size": 63488 00:27:12.213 }, 00:27:12.213 { 00:27:12.213 "name": "BaseBdev3", 00:27:12.213 "uuid": "3eee5fa6-8c08-5a42-b6a9-13bd3ddc0c44", 00:27:12.213 "is_configured": true, 00:27:12.213 "data_offset": 2048, 00:27:12.213 "data_size": 63488 00:27:12.213 }, 00:27:12.213 { 00:27:12.213 "name": "BaseBdev4", 00:27:12.213 "uuid": "55306c4d-f053-5701-a802-7ffc63782d72", 00:27:12.213 "is_configured": true, 00:27:12.213 "data_offset": 2048, 00:27:12.213 "data_size": 63488 00:27:12.213 } 00:27:12.213 ] 00:27:12.213 }' 00:27:12.213 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:12.213 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:12.474 23:08:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:12.474 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.474 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:12.474 [2024-12-09 23:08:47.634043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:12.474 [2024-12-09 23:08:47.634289] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:12.474 [2024-12-09 23:08:47.634387] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:12.474 [2024-12-09 23:08:47.634460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:12.474 [2024-12-09 23:08:47.642258] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:27:12.474 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.474 23:08:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:27:12.474 [2024-12-09 23:08:47.647730] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:13.419 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:13.419 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:13.419 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:13.419 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:13.419 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:13.419 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:13.419 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:13.419 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.419 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:13.419 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.419 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:13.419 "name": "raid_bdev1", 00:27:13.419 "uuid": "32b90177-099d-4203-b4d6-43dc2785d18b", 00:27:13.419 "strip_size_kb": 64, 00:27:13.419 "state": "online", 00:27:13.419 "raid_level": "raid5f", 00:27:13.419 "superblock": true, 00:27:13.419 "num_base_bdevs": 4, 00:27:13.419 "num_base_bdevs_discovered": 4, 00:27:13.419 "num_base_bdevs_operational": 4, 00:27:13.419 "process": { 00:27:13.419 "type": "rebuild", 00:27:13.419 "target": "spare", 00:27:13.419 "progress": { 00:27:13.419 "blocks": 19200, 00:27:13.419 "percent": 10 00:27:13.419 } 00:27:13.419 }, 00:27:13.419 "base_bdevs_list": [ 00:27:13.419 { 00:27:13.419 "name": "spare", 00:27:13.419 "uuid": "9c2608e5-88a1-57b2-a8eb-2086911f3960", 00:27:13.419 "is_configured": true, 00:27:13.419 "data_offset": 2048, 00:27:13.419 "data_size": 63488 00:27:13.419 }, 00:27:13.419 { 00:27:13.419 "name": "BaseBdev2", 00:27:13.419 "uuid": "f82821af-68c1-5aea-8c5e-7ee10a7e8be4", 00:27:13.419 "is_configured": true, 00:27:13.419 "data_offset": 2048, 00:27:13.419 "data_size": 63488 00:27:13.419 }, 00:27:13.419 { 00:27:13.419 "name": "BaseBdev3", 00:27:13.419 "uuid": "3eee5fa6-8c08-5a42-b6a9-13bd3ddc0c44", 00:27:13.419 "is_configured": true, 00:27:13.419 "data_offset": 2048, 00:27:13.419 "data_size": 63488 00:27:13.419 }, 00:27:13.419 { 00:27:13.419 "name": "BaseBdev4", 00:27:13.419 "uuid": "55306c4d-f053-5701-a802-7ffc63782d72", 00:27:13.419 "is_configured": 
true, 00:27:13.419 "data_offset": 2048, 00:27:13.419 "data_size": 63488 00:27:13.419 } 00:27:13.419 ] 00:27:13.419 }' 00:27:13.419 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:13.419 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:13.419 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:13.419 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:13.419 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:27:13.419 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.419 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:13.419 [2024-12-09 23:08:48.744840] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:13.419 [2024-12-09 23:08:48.755354] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:13.419 [2024-12-09 23:08:48.755404] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:13.419 [2024-12-09 23:08:48.755418] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:13.419 [2024-12-09 23:08:48.755425] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:13.419 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.419 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:13.419 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:13.419 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:27:13.419 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:13.419 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:13.419 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:13.419 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:13.419 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:13.419 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:13.419 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:13.419 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:13.419 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:13.419 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.419 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:13.681 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.681 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:13.681 "name": "raid_bdev1", 00:27:13.681 "uuid": "32b90177-099d-4203-b4d6-43dc2785d18b", 00:27:13.681 "strip_size_kb": 64, 00:27:13.681 "state": "online", 00:27:13.681 "raid_level": "raid5f", 00:27:13.681 "superblock": true, 00:27:13.681 "num_base_bdevs": 4, 00:27:13.681 "num_base_bdevs_discovered": 3, 00:27:13.681 "num_base_bdevs_operational": 3, 00:27:13.681 "base_bdevs_list": [ 00:27:13.681 { 00:27:13.681 "name": null, 00:27:13.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:13.681 "is_configured": false, 00:27:13.681 
"data_offset": 0, 00:27:13.681 "data_size": 63488 00:27:13.681 }, 00:27:13.681 { 00:27:13.681 "name": "BaseBdev2", 00:27:13.681 "uuid": "f82821af-68c1-5aea-8c5e-7ee10a7e8be4", 00:27:13.681 "is_configured": true, 00:27:13.681 "data_offset": 2048, 00:27:13.681 "data_size": 63488 00:27:13.681 }, 00:27:13.681 { 00:27:13.681 "name": "BaseBdev3", 00:27:13.681 "uuid": "3eee5fa6-8c08-5a42-b6a9-13bd3ddc0c44", 00:27:13.681 "is_configured": true, 00:27:13.681 "data_offset": 2048, 00:27:13.681 "data_size": 63488 00:27:13.681 }, 00:27:13.681 { 00:27:13.681 "name": "BaseBdev4", 00:27:13.681 "uuid": "55306c4d-f053-5701-a802-7ffc63782d72", 00:27:13.681 "is_configured": true, 00:27:13.681 "data_offset": 2048, 00:27:13.681 "data_size": 63488 00:27:13.681 } 00:27:13.681 ] 00:27:13.681 }' 00:27:13.681 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:13.681 23:08:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:13.942 23:08:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:27:13.942 23:08:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.942 23:08:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:13.942 [2024-12-09 23:08:49.103895] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:13.942 [2024-12-09 23:08:49.103954] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:13.942 [2024-12-09 23:08:49.103974] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:27:13.942 [2024-12-09 23:08:49.103983] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:13.942 [2024-12-09 23:08:49.104377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:13.942 [2024-12-09 23:08:49.104397] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:13.942 [2024-12-09 23:08:49.104469] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:13.942 [2024-12-09 23:08:49.104480] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:13.942 [2024-12-09 23:08:49.104488] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:13.942 [2024-12-09 23:08:49.104508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:13.942 [2024-12-09 23:08:49.112222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:27:13.942 spare 00:27:13.942 23:08:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.942 23:08:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:27:13.942 [2024-12-09 23:08:49.117388] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:14.893 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:14.894 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:14.894 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:14.894 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:14.894 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:14.894 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:14.894 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:14.894 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.894 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:14.894 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.894 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:14.894 "name": "raid_bdev1", 00:27:14.894 "uuid": "32b90177-099d-4203-b4d6-43dc2785d18b", 00:27:14.894 "strip_size_kb": 64, 00:27:14.894 "state": "online", 00:27:14.894 "raid_level": "raid5f", 00:27:14.894 "superblock": true, 00:27:14.894 "num_base_bdevs": 4, 00:27:14.894 "num_base_bdevs_discovered": 4, 00:27:14.894 "num_base_bdevs_operational": 4, 00:27:14.894 "process": { 00:27:14.894 "type": "rebuild", 00:27:14.894 "target": "spare", 00:27:14.894 "progress": { 00:27:14.894 "blocks": 19200, 00:27:14.894 "percent": 10 00:27:14.894 } 00:27:14.894 }, 00:27:14.894 "base_bdevs_list": [ 00:27:14.894 { 00:27:14.894 "name": "spare", 00:27:14.894 "uuid": "9c2608e5-88a1-57b2-a8eb-2086911f3960", 00:27:14.894 "is_configured": true, 00:27:14.894 "data_offset": 2048, 00:27:14.894 "data_size": 63488 00:27:14.894 }, 00:27:14.894 { 00:27:14.894 "name": "BaseBdev2", 00:27:14.894 "uuid": "f82821af-68c1-5aea-8c5e-7ee10a7e8be4", 00:27:14.894 "is_configured": true, 00:27:14.894 "data_offset": 2048, 00:27:14.894 "data_size": 63488 00:27:14.894 }, 00:27:14.894 { 00:27:14.894 "name": "BaseBdev3", 00:27:14.894 "uuid": "3eee5fa6-8c08-5a42-b6a9-13bd3ddc0c44", 00:27:14.894 "is_configured": true, 00:27:14.894 "data_offset": 2048, 00:27:14.894 "data_size": 63488 00:27:14.894 }, 00:27:14.894 { 00:27:14.894 "name": "BaseBdev4", 00:27:14.894 "uuid": "55306c4d-f053-5701-a802-7ffc63782d72", 00:27:14.894 "is_configured": true, 00:27:14.894 "data_offset": 2048, 00:27:14.894 "data_size": 63488 00:27:14.894 } 00:27:14.894 ] 00:27:14.894 }' 00:27:14.894 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type 
// "none"' 00:27:14.894 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:14.894 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:14.894 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:14.894 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:27:14.894 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.894 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:14.894 [2024-12-09 23:08:50.222321] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:14.894 [2024-12-09 23:08:50.224885] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:14.894 [2024-12-09 23:08:50.225038] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:14.894 [2024-12-09 23:08:50.225058] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:14.894 [2024-12-09 23:08:50.225065] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:14.894 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.894 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:14.894 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:14.894 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:14.894 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:14.894 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:27:14.894 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:14.894 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:14.894 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:14.894 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:14.894 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:14.894 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:14.894 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.894 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:14.894 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:15.155 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.155 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:15.155 "name": "raid_bdev1", 00:27:15.155 "uuid": "32b90177-099d-4203-b4d6-43dc2785d18b", 00:27:15.155 "strip_size_kb": 64, 00:27:15.155 "state": "online", 00:27:15.155 "raid_level": "raid5f", 00:27:15.155 "superblock": true, 00:27:15.155 "num_base_bdevs": 4, 00:27:15.155 "num_base_bdevs_discovered": 3, 00:27:15.155 "num_base_bdevs_operational": 3, 00:27:15.155 "base_bdevs_list": [ 00:27:15.155 { 00:27:15.155 "name": null, 00:27:15.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:15.155 "is_configured": false, 00:27:15.155 "data_offset": 0, 00:27:15.155 "data_size": 63488 00:27:15.155 }, 00:27:15.155 { 00:27:15.155 "name": "BaseBdev2", 00:27:15.155 "uuid": "f82821af-68c1-5aea-8c5e-7ee10a7e8be4", 00:27:15.155 "is_configured": true, 00:27:15.155 
"data_offset": 2048, 00:27:15.155 "data_size": 63488 00:27:15.155 }, 00:27:15.155 { 00:27:15.155 "name": "BaseBdev3", 00:27:15.155 "uuid": "3eee5fa6-8c08-5a42-b6a9-13bd3ddc0c44", 00:27:15.155 "is_configured": true, 00:27:15.155 "data_offset": 2048, 00:27:15.155 "data_size": 63488 00:27:15.155 }, 00:27:15.155 { 00:27:15.155 "name": "BaseBdev4", 00:27:15.155 "uuid": "55306c4d-f053-5701-a802-7ffc63782d72", 00:27:15.155 "is_configured": true, 00:27:15.155 "data_offset": 2048, 00:27:15.155 "data_size": 63488 00:27:15.155 } 00:27:15.155 ] 00:27:15.155 }' 00:27:15.155 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:15.155 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:15.418 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:15.418 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:15.418 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:15.418 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:15.418 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:15.418 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:15.418 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:15.418 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.418 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:15.418 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.418 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:15.418 
"name": "raid_bdev1", 00:27:15.418 "uuid": "32b90177-099d-4203-b4d6-43dc2785d18b", 00:27:15.418 "strip_size_kb": 64, 00:27:15.418 "state": "online", 00:27:15.418 "raid_level": "raid5f", 00:27:15.418 "superblock": true, 00:27:15.418 "num_base_bdevs": 4, 00:27:15.418 "num_base_bdevs_discovered": 3, 00:27:15.418 "num_base_bdevs_operational": 3, 00:27:15.418 "base_bdevs_list": [ 00:27:15.418 { 00:27:15.418 "name": null, 00:27:15.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:15.418 "is_configured": false, 00:27:15.418 "data_offset": 0, 00:27:15.418 "data_size": 63488 00:27:15.418 }, 00:27:15.418 { 00:27:15.418 "name": "BaseBdev2", 00:27:15.418 "uuid": "f82821af-68c1-5aea-8c5e-7ee10a7e8be4", 00:27:15.418 "is_configured": true, 00:27:15.418 "data_offset": 2048, 00:27:15.418 "data_size": 63488 00:27:15.418 }, 00:27:15.418 { 00:27:15.418 "name": "BaseBdev3", 00:27:15.418 "uuid": "3eee5fa6-8c08-5a42-b6a9-13bd3ddc0c44", 00:27:15.418 "is_configured": true, 00:27:15.418 "data_offset": 2048, 00:27:15.418 "data_size": 63488 00:27:15.418 }, 00:27:15.418 { 00:27:15.418 "name": "BaseBdev4", 00:27:15.418 "uuid": "55306c4d-f053-5701-a802-7ffc63782d72", 00:27:15.418 "is_configured": true, 00:27:15.418 "data_offset": 2048, 00:27:15.418 "data_size": 63488 00:27:15.418 } 00:27:15.418 ] 00:27:15.418 }' 00:27:15.418 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:15.418 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:15.418 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:15.418 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:15.418 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:27:15.418 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:15.418 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:15.418 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.418 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:15.418 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.418 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:15.418 [2024-12-09 23:08:50.653156] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:15.418 [2024-12-09 23:08:50.653204] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:15.418 [2024-12-09 23:08:50.653222] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:27:15.418 [2024-12-09 23:08:50.653229] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:15.418 [2024-12-09 23:08:50.653591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:15.418 [2024-12-09 23:08:50.653609] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:15.418 [2024-12-09 23:08:50.653672] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:27:15.418 [2024-12-09 23:08:50.653682] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:15.418 [2024-12-09 23:08:50.653690] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:15.418 [2024-12-09 23:08:50.653697] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:27:15.418 BaseBdev1 00:27:15.418 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.418 23:08:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:27:16.361 23:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:16.361 23:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:16.361 23:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:16.361 23:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:16.361 23:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:16.361 23:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:16.361 23:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:16.361 23:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:16.361 23:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:16.361 23:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:16.361 23:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:16.361 23:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.361 23:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:16.361 23:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:16.361 23:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.361 23:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:16.361 "name": "raid_bdev1", 00:27:16.361 "uuid": 
"32b90177-099d-4203-b4d6-43dc2785d18b", 00:27:16.361 "strip_size_kb": 64, 00:27:16.361 "state": "online", 00:27:16.361 "raid_level": "raid5f", 00:27:16.361 "superblock": true, 00:27:16.361 "num_base_bdevs": 4, 00:27:16.361 "num_base_bdevs_discovered": 3, 00:27:16.361 "num_base_bdevs_operational": 3, 00:27:16.361 "base_bdevs_list": [ 00:27:16.361 { 00:27:16.361 "name": null, 00:27:16.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:16.361 "is_configured": false, 00:27:16.361 "data_offset": 0, 00:27:16.361 "data_size": 63488 00:27:16.361 }, 00:27:16.361 { 00:27:16.361 "name": "BaseBdev2", 00:27:16.361 "uuid": "f82821af-68c1-5aea-8c5e-7ee10a7e8be4", 00:27:16.361 "is_configured": true, 00:27:16.361 "data_offset": 2048, 00:27:16.361 "data_size": 63488 00:27:16.361 }, 00:27:16.361 { 00:27:16.361 "name": "BaseBdev3", 00:27:16.361 "uuid": "3eee5fa6-8c08-5a42-b6a9-13bd3ddc0c44", 00:27:16.361 "is_configured": true, 00:27:16.361 "data_offset": 2048, 00:27:16.361 "data_size": 63488 00:27:16.361 }, 00:27:16.361 { 00:27:16.361 "name": "BaseBdev4", 00:27:16.361 "uuid": "55306c4d-f053-5701-a802-7ffc63782d72", 00:27:16.361 "is_configured": true, 00:27:16.361 "data_offset": 2048, 00:27:16.361 "data_size": 63488 00:27:16.361 } 00:27:16.361 ] 00:27:16.361 }' 00:27:16.361 23:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:16.361 23:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:16.623 23:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:16.623 23:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:16.623 23:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:16.623 23:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:16.623 23:08:51 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:16.623 23:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:16.623 23:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.623 23:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:16.623 23:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:16.623 23:08:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.884 23:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:16.884 "name": "raid_bdev1", 00:27:16.884 "uuid": "32b90177-099d-4203-b4d6-43dc2785d18b", 00:27:16.884 "strip_size_kb": 64, 00:27:16.884 "state": "online", 00:27:16.884 "raid_level": "raid5f", 00:27:16.884 "superblock": true, 00:27:16.884 "num_base_bdevs": 4, 00:27:16.884 "num_base_bdevs_discovered": 3, 00:27:16.884 "num_base_bdevs_operational": 3, 00:27:16.884 "base_bdevs_list": [ 00:27:16.884 { 00:27:16.884 "name": null, 00:27:16.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:16.884 "is_configured": false, 00:27:16.884 "data_offset": 0, 00:27:16.884 "data_size": 63488 00:27:16.884 }, 00:27:16.884 { 00:27:16.884 "name": "BaseBdev2", 00:27:16.884 "uuid": "f82821af-68c1-5aea-8c5e-7ee10a7e8be4", 00:27:16.884 "is_configured": true, 00:27:16.884 "data_offset": 2048, 00:27:16.884 "data_size": 63488 00:27:16.884 }, 00:27:16.884 { 00:27:16.884 "name": "BaseBdev3", 00:27:16.884 "uuid": "3eee5fa6-8c08-5a42-b6a9-13bd3ddc0c44", 00:27:16.884 "is_configured": true, 00:27:16.884 "data_offset": 2048, 00:27:16.884 "data_size": 63488 00:27:16.884 }, 00:27:16.884 { 00:27:16.884 "name": "BaseBdev4", 00:27:16.884 "uuid": "55306c4d-f053-5701-a802-7ffc63782d72", 00:27:16.884 "is_configured": true, 00:27:16.884 "data_offset": 2048, 00:27:16.884 "data_size": 63488 
00:27:16.884 } 00:27:16.884 ] 00:27:16.884 }' 00:27:16.884 23:08:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:16.884 23:08:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:16.884 23:08:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:16.884 23:08:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:16.884 23:08:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:16.884 23:08:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:27:16.884 23:08:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:16.884 23:08:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:16.884 23:08:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:16.884 23:08:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:16.884 23:08:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:16.884 23:08:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:16.884 23:08:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.884 23:08:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:16.884 [2024-12-09 23:08:52.073456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:16.884 [2024-12-09 23:08:52.073576] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than 
existing raid bdev raid_bdev1 (5) 00:27:16.884 [2024-12-09 23:08:52.073588] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:16.884 request: 00:27:16.884 { 00:27:16.885 "base_bdev": "BaseBdev1", 00:27:16.885 "raid_bdev": "raid_bdev1", 00:27:16.885 "method": "bdev_raid_add_base_bdev", 00:27:16.885 "req_id": 1 00:27:16.885 } 00:27:16.885 Got JSON-RPC error response 00:27:16.885 response: 00:27:16.885 { 00:27:16.885 "code": -22, 00:27:16.885 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:27:16.885 } 00:27:16.885 23:08:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:16.885 23:08:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:27:16.885 23:08:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:16.885 23:08:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:16.885 23:08:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:16.885 23:08:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:27:17.831 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:17.831 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:17.831 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:17.831 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:27:17.831 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:17.831 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:17.831 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:27:17.831 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:17.831 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:17.831 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:17.831 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:17.831 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:17.831 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.831 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:17.831 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.831 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:17.831 "name": "raid_bdev1", 00:27:17.831 "uuid": "32b90177-099d-4203-b4d6-43dc2785d18b", 00:27:17.831 "strip_size_kb": 64, 00:27:17.831 "state": "online", 00:27:17.831 "raid_level": "raid5f", 00:27:17.831 "superblock": true, 00:27:17.831 "num_base_bdevs": 4, 00:27:17.831 "num_base_bdevs_discovered": 3, 00:27:17.831 "num_base_bdevs_operational": 3, 00:27:17.831 "base_bdevs_list": [ 00:27:17.831 { 00:27:17.831 "name": null, 00:27:17.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:17.831 "is_configured": false, 00:27:17.831 "data_offset": 0, 00:27:17.831 "data_size": 63488 00:27:17.831 }, 00:27:17.831 { 00:27:17.831 "name": "BaseBdev2", 00:27:17.831 "uuid": "f82821af-68c1-5aea-8c5e-7ee10a7e8be4", 00:27:17.831 "is_configured": true, 00:27:17.831 "data_offset": 2048, 00:27:17.831 "data_size": 63488 00:27:17.831 }, 00:27:17.831 { 00:27:17.831 "name": "BaseBdev3", 00:27:17.831 "uuid": "3eee5fa6-8c08-5a42-b6a9-13bd3ddc0c44", 00:27:17.831 "is_configured": true, 00:27:17.831 
"data_offset": 2048, 00:27:17.831 "data_size": 63488 00:27:17.831 }, 00:27:17.831 { 00:27:17.831 "name": "BaseBdev4", 00:27:17.831 "uuid": "55306c4d-f053-5701-a802-7ffc63782d72", 00:27:17.831 "is_configured": true, 00:27:17.831 "data_offset": 2048, 00:27:17.831 "data_size": 63488 00:27:17.831 } 00:27:17.831 ] 00:27:17.831 }' 00:27:17.831 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:17.831 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:18.131 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:18.131 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:18.131 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:18.131 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:18.131 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:18.131 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:18.131 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.131 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:18.131 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:18.131 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.131 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:18.131 "name": "raid_bdev1", 00:27:18.131 "uuid": "32b90177-099d-4203-b4d6-43dc2785d18b", 00:27:18.131 "strip_size_kb": 64, 00:27:18.131 "state": "online", 00:27:18.131 "raid_level": "raid5f", 00:27:18.131 "superblock": true, 00:27:18.131 
"num_base_bdevs": 4, 00:27:18.131 "num_base_bdevs_discovered": 3, 00:27:18.131 "num_base_bdevs_operational": 3, 00:27:18.131 "base_bdevs_list": [ 00:27:18.131 { 00:27:18.131 "name": null, 00:27:18.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:18.131 "is_configured": false, 00:27:18.131 "data_offset": 0, 00:27:18.131 "data_size": 63488 00:27:18.131 }, 00:27:18.131 { 00:27:18.131 "name": "BaseBdev2", 00:27:18.131 "uuid": "f82821af-68c1-5aea-8c5e-7ee10a7e8be4", 00:27:18.131 "is_configured": true, 00:27:18.131 "data_offset": 2048, 00:27:18.131 "data_size": 63488 00:27:18.131 }, 00:27:18.131 { 00:27:18.131 "name": "BaseBdev3", 00:27:18.131 "uuid": "3eee5fa6-8c08-5a42-b6a9-13bd3ddc0c44", 00:27:18.131 "is_configured": true, 00:27:18.131 "data_offset": 2048, 00:27:18.131 "data_size": 63488 00:27:18.131 }, 00:27:18.131 { 00:27:18.131 "name": "BaseBdev4", 00:27:18.131 "uuid": "55306c4d-f053-5701-a802-7ffc63782d72", 00:27:18.131 "is_configured": true, 00:27:18.131 "data_offset": 2048, 00:27:18.131 "data_size": 63488 00:27:18.131 } 00:27:18.131 ] 00:27:18.131 }' 00:27:18.131 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:18.131 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:18.131 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:18.392 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:18.392 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82695 00:27:18.392 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82695 ']' 00:27:18.392 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82695 00:27:18.392 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:27:18.392 23:08:53 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:18.392 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82695 00:27:18.392 killing process with pid 82695 00:27:18.392 Received shutdown signal, test time was about 60.000000 seconds 00:27:18.392 00:27:18.392 Latency(us) 00:27:18.392 [2024-12-09T23:08:53.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:18.392 [2024-12-09T23:08:53.755Z] =================================================================================================================== 00:27:18.392 [2024-12-09T23:08:53.755Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:18.392 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:18.392 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:18.392 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82695' 00:27:18.392 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82695 00:27:18.392 [2024-12-09 23:08:53.553132] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:18.392 23:08:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82695 00:27:18.392 [2024-12-09 23:08:53.553224] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:18.392 [2024-12-09 23:08:53.553285] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:18.392 [2024-12-09 23:08:53.553295] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:27:18.655 [2024-12-09 23:08:53.794880] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:19.230 23:08:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:27:19.230 00:27:19.230 real 0m24.611s 00:27:19.230 user 0m29.856s 00:27:19.230 sys 0m2.237s 00:27:19.230 ************************************ 00:27:19.230 END TEST raid5f_rebuild_test_sb 00:27:19.230 ************************************ 00:27:19.230 23:08:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:19.230 23:08:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:19.230 23:08:54 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:27:19.230 23:08:54 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:27:19.230 23:08:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:27:19.230 23:08:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:19.230 23:08:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:19.230 ************************************ 00:27:19.230 START TEST raid_state_function_test_sb_4k 00:27:19.230 ************************************ 00:27:19.230 23:08:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:27:19.230 23:08:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:27:19.230 23:08:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:27:19.230 23:08:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:27:19.230 23:08:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:27:19.230 23:08:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:27:19.230 23:08:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:19.230 23:08:54 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:27:19.230 23:08:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:19.230 23:08:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:19.230 23:08:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:27:19.230 23:08:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:19.230 23:08:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:19.230 Process raid pid: 83501 00:27:19.230 23:08:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:19.230 23:08:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:27:19.230 23:08:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:27:19.230 23:08:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:27:19.230 23:08:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:27:19.230 23:08:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:27:19.230 23:08:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:27:19.230 23:08:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:27:19.230 23:08:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:27:19.230 23:08:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:27:19.230 23:08:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=83501 00:27:19.230 23:08:54 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83501' 00:27:19.230 23:08:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 83501 00:27:19.230 23:08:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 83501 ']' 00:27:19.230 23:08:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:19.230 23:08:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:19.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:19.230 23:08:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:19.230 23:08:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:19.230 23:08:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:27:19.230 23:08:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:19.230 [2024-12-09 23:08:54.468670] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:27:19.230 [2024-12-09 23:08:54.468931] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:19.492 [2024-12-09 23:08:54.631332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.492 [2024-12-09 23:08:54.737206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.753 [2024-12-09 23:08:54.878626] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:19.753 [2024-12-09 23:08:54.878663] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:20.012 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:20.012 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:27:20.012 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:27:20.012 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.012 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:20.012 [2024-12-09 23:08:55.320944] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:20.012 [2024-12-09 23:08:55.321124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:20.012 [2024-12-09 23:08:55.321148] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:20.012 [2024-12-09 23:08:55.321160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:20.012 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:27:20.012 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:20.012 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:20.012 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:20.012 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:20.012 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:20.012 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:20.012 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:20.012 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:20.012 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:20.012 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:20.012 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:20.012 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:20.012 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.012 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:20.012 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.012 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:20.012 "name": "Existed_Raid", 00:27:20.012 "uuid": 
"93b5e826-2d84-4a95-ab79-1ca2ca42814c", 00:27:20.012 "strip_size_kb": 0, 00:27:20.012 "state": "configuring", 00:27:20.012 "raid_level": "raid1", 00:27:20.012 "superblock": true, 00:27:20.012 "num_base_bdevs": 2, 00:27:20.012 "num_base_bdevs_discovered": 0, 00:27:20.012 "num_base_bdevs_operational": 2, 00:27:20.012 "base_bdevs_list": [ 00:27:20.012 { 00:27:20.012 "name": "BaseBdev1", 00:27:20.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:20.012 "is_configured": false, 00:27:20.012 "data_offset": 0, 00:27:20.012 "data_size": 0 00:27:20.012 }, 00:27:20.012 { 00:27:20.012 "name": "BaseBdev2", 00:27:20.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:20.012 "is_configured": false, 00:27:20.012 "data_offset": 0, 00:27:20.012 "data_size": 0 00:27:20.012 } 00:27:20.012 ] 00:27:20.012 }' 00:27:20.012 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:20.012 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:20.274 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:20.274 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.274 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:20.274 [2024-12-09 23:08:55.612950] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:20.275 [2024-12-09 23:08:55.613094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:27:20.275 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.275 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:27:20.275 23:08:55 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.275 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:20.275 [2024-12-09 23:08:55.620948] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:20.275 [2024-12-09 23:08:55.620989] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:20.275 [2024-12-09 23:08:55.620998] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:20.275 [2024-12-09 23:08:55.621009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:20.275 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.275 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:27:20.275 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.275 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:20.545 [2024-12-09 23:08:55.653766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:20.545 BaseBdev1 00:27:20.545 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.545 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:27:20.545 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:27:20.545 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:20.545 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:27:20.545 23:08:55 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:20.545 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:20.545 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:20.545 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.545 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:20.545 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.545 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:20.545 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.545 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:20.545 [ 00:27:20.545 { 00:27:20.545 "name": "BaseBdev1", 00:27:20.545 "aliases": [ 00:27:20.545 "296b7b6b-85f5-46c7-9837-d009b8edb441" 00:27:20.545 ], 00:27:20.545 "product_name": "Malloc disk", 00:27:20.545 "block_size": 4096, 00:27:20.545 "num_blocks": 8192, 00:27:20.545 "uuid": "296b7b6b-85f5-46c7-9837-d009b8edb441", 00:27:20.545 "assigned_rate_limits": { 00:27:20.545 "rw_ios_per_sec": 0, 00:27:20.545 "rw_mbytes_per_sec": 0, 00:27:20.545 "r_mbytes_per_sec": 0, 00:27:20.545 "w_mbytes_per_sec": 0 00:27:20.545 }, 00:27:20.545 "claimed": true, 00:27:20.545 "claim_type": "exclusive_write", 00:27:20.545 "zoned": false, 00:27:20.545 "supported_io_types": { 00:27:20.545 "read": true, 00:27:20.545 "write": true, 00:27:20.545 "unmap": true, 00:27:20.545 "flush": true, 00:27:20.545 "reset": true, 00:27:20.545 "nvme_admin": false, 00:27:20.545 "nvme_io": false, 00:27:20.545 "nvme_io_md": false, 00:27:20.545 "write_zeroes": true, 00:27:20.545 "zcopy": true, 00:27:20.545 
"get_zone_info": false, 00:27:20.545 "zone_management": false, 00:27:20.545 "zone_append": false, 00:27:20.545 "compare": false, 00:27:20.545 "compare_and_write": false, 00:27:20.545 "abort": true, 00:27:20.545 "seek_hole": false, 00:27:20.545 "seek_data": false, 00:27:20.545 "copy": true, 00:27:20.545 "nvme_iov_md": false 00:27:20.545 }, 00:27:20.545 "memory_domains": [ 00:27:20.545 { 00:27:20.545 "dma_device_id": "system", 00:27:20.545 "dma_device_type": 1 00:27:20.545 }, 00:27:20.545 { 00:27:20.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:20.545 "dma_device_type": 2 00:27:20.545 } 00:27:20.545 ], 00:27:20.545 "driver_specific": {} 00:27:20.545 } 00:27:20.545 ] 00:27:20.545 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.545 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:27:20.545 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:20.545 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:20.545 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:20.545 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:20.545 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:20.545 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:20.545 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:20.545 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:20.545 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:27:20.545 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:20.546 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:20.546 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:20.546 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.546 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:20.546 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.546 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:20.546 "name": "Existed_Raid", 00:27:20.546 "uuid": "7250e06f-82db-4f89-9521-10d6992d8ab0", 00:27:20.546 "strip_size_kb": 0, 00:27:20.546 "state": "configuring", 00:27:20.546 "raid_level": "raid1", 00:27:20.546 "superblock": true, 00:27:20.546 "num_base_bdevs": 2, 00:27:20.546 "num_base_bdevs_discovered": 1, 00:27:20.546 "num_base_bdevs_operational": 2, 00:27:20.546 "base_bdevs_list": [ 00:27:20.546 { 00:27:20.546 "name": "BaseBdev1", 00:27:20.546 "uuid": "296b7b6b-85f5-46c7-9837-d009b8edb441", 00:27:20.546 "is_configured": true, 00:27:20.546 "data_offset": 256, 00:27:20.546 "data_size": 7936 00:27:20.546 }, 00:27:20.546 { 00:27:20.546 "name": "BaseBdev2", 00:27:20.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:20.546 "is_configured": false, 00:27:20.546 "data_offset": 0, 00:27:20.546 "data_size": 0 00:27:20.546 } 00:27:20.546 ] 00:27:20.546 }' 00:27:20.546 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:20.546 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:20.806 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:20.806 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.806 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:20.806 [2024-12-09 23:08:55.993884] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:20.806 [2024-12-09 23:08:55.993932] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:27:20.806 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.806 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:27:20.806 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.806 23:08:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:20.806 [2024-12-09 23:08:56.001935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:20.806 [2024-12-09 23:08:56.003788] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:20.806 [2024-12-09 23:08:56.003942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:20.806 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.806 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:27:20.806 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:20.806 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:20.806 23:08:56 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:20.806 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:20.806 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:20.806 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:20.806 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:20.806 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:20.806 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:20.806 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:20.806 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:20.806 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:20.806 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.806 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:20.806 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:20.806 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.806 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:20.806 "name": "Existed_Raid", 00:27:20.806 "uuid": "f0dc0f5d-37b1-49b9-818c-98f793f590e0", 00:27:20.806 "strip_size_kb": 0, 00:27:20.806 "state": "configuring", 00:27:20.806 "raid_level": "raid1", 00:27:20.806 "superblock": true, 
00:27:20.806 "num_base_bdevs": 2, 00:27:20.806 "num_base_bdevs_discovered": 1, 00:27:20.806 "num_base_bdevs_operational": 2, 00:27:20.806 "base_bdevs_list": [ 00:27:20.806 { 00:27:20.806 "name": "BaseBdev1", 00:27:20.806 "uuid": "296b7b6b-85f5-46c7-9837-d009b8edb441", 00:27:20.806 "is_configured": true, 00:27:20.806 "data_offset": 256, 00:27:20.806 "data_size": 7936 00:27:20.806 }, 00:27:20.806 { 00:27:20.806 "name": "BaseBdev2", 00:27:20.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:20.806 "is_configured": false, 00:27:20.806 "data_offset": 0, 00:27:20.806 "data_size": 0 00:27:20.806 } 00:27:20.806 ] 00:27:20.806 }' 00:27:20.806 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:20.806 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:21.066 [2024-12-09 23:08:56.348629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:21.066 [2024-12-09 23:08:56.348847] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:27:21.066 [2024-12-09 23:08:56.348860] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:21.066 BaseBdev2 00:27:21.066 [2024-12-09 23:08:56.349137] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:27:21.066 [2024-12-09 23:08:56.349282] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:21.066 [2024-12-09 23:08:56.349295] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x617000007e80 00:27:21.066 [2024-12-09 23:08:56.349423] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:21.066 [ 00:27:21.066 { 00:27:21.066 "name": "BaseBdev2", 00:27:21.066 "aliases": [ 00:27:21.066 "5df8fbaa-5b3a-42a8-b7ee-ec5837b68565" 00:27:21.066 ], 00:27:21.066 "product_name": "Malloc 
disk", 00:27:21.066 "block_size": 4096, 00:27:21.066 "num_blocks": 8192, 00:27:21.066 "uuid": "5df8fbaa-5b3a-42a8-b7ee-ec5837b68565", 00:27:21.066 "assigned_rate_limits": { 00:27:21.066 "rw_ios_per_sec": 0, 00:27:21.066 "rw_mbytes_per_sec": 0, 00:27:21.066 "r_mbytes_per_sec": 0, 00:27:21.066 "w_mbytes_per_sec": 0 00:27:21.066 }, 00:27:21.066 "claimed": true, 00:27:21.066 "claim_type": "exclusive_write", 00:27:21.066 "zoned": false, 00:27:21.066 "supported_io_types": { 00:27:21.066 "read": true, 00:27:21.066 "write": true, 00:27:21.066 "unmap": true, 00:27:21.066 "flush": true, 00:27:21.066 "reset": true, 00:27:21.066 "nvme_admin": false, 00:27:21.066 "nvme_io": false, 00:27:21.066 "nvme_io_md": false, 00:27:21.066 "write_zeroes": true, 00:27:21.066 "zcopy": true, 00:27:21.066 "get_zone_info": false, 00:27:21.066 "zone_management": false, 00:27:21.066 "zone_append": false, 00:27:21.066 "compare": false, 00:27:21.066 "compare_and_write": false, 00:27:21.066 "abort": true, 00:27:21.066 "seek_hole": false, 00:27:21.066 "seek_data": false, 00:27:21.066 "copy": true, 00:27:21.066 "nvme_iov_md": false 00:27:21.066 }, 00:27:21.066 "memory_domains": [ 00:27:21.066 { 00:27:21.066 "dma_device_id": "system", 00:27:21.066 "dma_device_type": 1 00:27:21.066 }, 00:27:21.066 { 00:27:21.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:21.066 "dma_device_type": 2 00:27:21.066 } 00:27:21.066 ], 00:27:21.066 "driver_specific": {} 00:27:21.066 } 00:27:21.066 ] 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:21.066 "name": "Existed_Raid", 00:27:21.066 "uuid": "f0dc0f5d-37b1-49b9-818c-98f793f590e0", 00:27:21.066 "strip_size_kb": 0, 00:27:21.066 "state": "online", 
00:27:21.066 "raid_level": "raid1", 00:27:21.066 "superblock": true, 00:27:21.066 "num_base_bdevs": 2, 00:27:21.066 "num_base_bdevs_discovered": 2, 00:27:21.066 "num_base_bdevs_operational": 2, 00:27:21.066 "base_bdevs_list": [ 00:27:21.066 { 00:27:21.066 "name": "BaseBdev1", 00:27:21.066 "uuid": "296b7b6b-85f5-46c7-9837-d009b8edb441", 00:27:21.066 "is_configured": true, 00:27:21.066 "data_offset": 256, 00:27:21.066 "data_size": 7936 00:27:21.066 }, 00:27:21.066 { 00:27:21.066 "name": "BaseBdev2", 00:27:21.066 "uuid": "5df8fbaa-5b3a-42a8-b7ee-ec5837b68565", 00:27:21.066 "is_configured": true, 00:27:21.066 "data_offset": 256, 00:27:21.066 "data_size": 7936 00:27:21.066 } 00:27:21.066 ] 00:27:21.066 }' 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:21.066 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:21.633 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:27:21.633 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:27:21.633 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:21.633 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:21.633 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:27:21.633 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:21.633 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:21.633 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:27:21.633 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:21.633 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:21.633 [2024-12-09 23:08:56.693047] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:21.633 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.633 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:21.633 "name": "Existed_Raid", 00:27:21.633 "aliases": [ 00:27:21.633 "f0dc0f5d-37b1-49b9-818c-98f793f590e0" 00:27:21.633 ], 00:27:21.633 "product_name": "Raid Volume", 00:27:21.633 "block_size": 4096, 00:27:21.633 "num_blocks": 7936, 00:27:21.633 "uuid": "f0dc0f5d-37b1-49b9-818c-98f793f590e0", 00:27:21.633 "assigned_rate_limits": { 00:27:21.633 "rw_ios_per_sec": 0, 00:27:21.633 "rw_mbytes_per_sec": 0, 00:27:21.633 "r_mbytes_per_sec": 0, 00:27:21.633 "w_mbytes_per_sec": 0 00:27:21.633 }, 00:27:21.633 "claimed": false, 00:27:21.633 "zoned": false, 00:27:21.633 "supported_io_types": { 00:27:21.633 "read": true, 00:27:21.633 "write": true, 00:27:21.633 "unmap": false, 00:27:21.633 "flush": false, 00:27:21.633 "reset": true, 00:27:21.633 "nvme_admin": false, 00:27:21.633 "nvme_io": false, 00:27:21.633 "nvme_io_md": false, 00:27:21.633 "write_zeroes": true, 00:27:21.633 "zcopy": false, 00:27:21.633 "get_zone_info": false, 00:27:21.633 "zone_management": false, 00:27:21.633 "zone_append": false, 00:27:21.633 "compare": false, 00:27:21.633 "compare_and_write": false, 00:27:21.633 "abort": false, 00:27:21.633 "seek_hole": false, 00:27:21.633 "seek_data": false, 00:27:21.634 "copy": false, 00:27:21.634 "nvme_iov_md": false 00:27:21.634 }, 00:27:21.634 "memory_domains": [ 00:27:21.634 { 00:27:21.634 "dma_device_id": "system", 00:27:21.634 "dma_device_type": 1 00:27:21.634 }, 00:27:21.634 { 00:27:21.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:21.634 "dma_device_type": 2 00:27:21.634 }, 00:27:21.634 { 00:27:21.634 
"dma_device_id": "system", 00:27:21.634 "dma_device_type": 1 00:27:21.634 }, 00:27:21.634 { 00:27:21.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:21.634 "dma_device_type": 2 00:27:21.634 } 00:27:21.634 ], 00:27:21.634 "driver_specific": { 00:27:21.634 "raid": { 00:27:21.634 "uuid": "f0dc0f5d-37b1-49b9-818c-98f793f590e0", 00:27:21.634 "strip_size_kb": 0, 00:27:21.634 "state": "online", 00:27:21.634 "raid_level": "raid1", 00:27:21.634 "superblock": true, 00:27:21.634 "num_base_bdevs": 2, 00:27:21.634 "num_base_bdevs_discovered": 2, 00:27:21.634 "num_base_bdevs_operational": 2, 00:27:21.634 "base_bdevs_list": [ 00:27:21.634 { 00:27:21.634 "name": "BaseBdev1", 00:27:21.634 "uuid": "296b7b6b-85f5-46c7-9837-d009b8edb441", 00:27:21.634 "is_configured": true, 00:27:21.634 "data_offset": 256, 00:27:21.634 "data_size": 7936 00:27:21.634 }, 00:27:21.634 { 00:27:21.634 "name": "BaseBdev2", 00:27:21.634 "uuid": "5df8fbaa-5b3a-42a8-b7ee-ec5837b68565", 00:27:21.634 "is_configured": true, 00:27:21.634 "data_offset": 256, 00:27:21.634 "data_size": 7936 00:27:21.634 } 00:27:21.634 ] 00:27:21.634 } 00:27:21.634 } 00:27:21.634 }' 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:27:21.634 BaseBdev2' 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.634 
23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:21.634 [2024-12-09 23:08:56.852817] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:21.634 23:08:56 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:21.634 "name": "Existed_Raid", 00:27:21.634 "uuid": "f0dc0f5d-37b1-49b9-818c-98f793f590e0", 00:27:21.634 "strip_size_kb": 0, 00:27:21.634 "state": "online", 00:27:21.634 "raid_level": "raid1", 00:27:21.634 "superblock": true, 00:27:21.634 "num_base_bdevs": 2, 00:27:21.634 "num_base_bdevs_discovered": 1, 00:27:21.634 "num_base_bdevs_operational": 1, 00:27:21.634 "base_bdevs_list": [ 00:27:21.634 { 00:27:21.634 "name": null, 00:27:21.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:21.634 "is_configured": false, 00:27:21.634 "data_offset": 0, 00:27:21.634 "data_size": 7936 00:27:21.634 }, 00:27:21.634 { 00:27:21.634 "name": "BaseBdev2", 00:27:21.634 "uuid": "5df8fbaa-5b3a-42a8-b7ee-ec5837b68565", 00:27:21.634 "is_configured": true, 00:27:21.634 "data_offset": 256, 00:27:21.634 "data_size": 7936 00:27:21.634 } 00:27:21.634 ] 00:27:21.634 }' 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:21.634 23:08:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:21.894 23:08:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:27:21.894 23:08:57 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:21.894 23:08:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:21.894 23:08:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:21.894 23:08:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.895 23:08:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:21.895 23:08:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.156 23:08:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:22.156 23:08:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:22.156 23:08:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:27:22.156 23:08:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.156 23:08:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:22.156 [2024-12-09 23:08:57.272159] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:22.156 [2024-12-09 23:08:57.272259] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:22.156 [2024-12-09 23:08:57.330702] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:22.156 [2024-12-09 23:08:57.330926] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:22.156 [2024-12-09 23:08:57.330947] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:27:22.156 23:08:57 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.156 23:08:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:22.156 23:08:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:22.156 23:08:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:22.156 23:08:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.156 23:08:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:27:22.156 23:08:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:22.156 23:08:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.156 23:08:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:27:22.156 23:08:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:27:22.156 23:08:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:27:22.156 23:08:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 83501 00:27:22.156 23:08:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 83501 ']' 00:27:22.156 23:08:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 83501 00:27:22.156 23:08:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:27:22.156 23:08:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:22.156 23:08:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83501 00:27:22.156 killing process with pid 83501 00:27:22.156 23:08:57 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:22.156 23:08:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:22.156 23:08:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83501' 00:27:22.156 23:08:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 83501 00:27:22.156 [2024-12-09 23:08:57.390340] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:22.156 23:08:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 83501 00:27:22.156 [2024-12-09 23:08:57.400802] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:23.101 23:08:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:27:23.101 00:27:23.101 real 0m3.785s 00:27:23.101 user 0m5.414s 00:27:23.101 sys 0m0.587s 00:27:23.101 23:08:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:23.101 23:08:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:23.101 ************************************ 00:27:23.101 END TEST raid_state_function_test_sb_4k 00:27:23.101 ************************************ 00:27:23.101 23:08:58 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:27:23.101 23:08:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:23.101 23:08:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:23.101 23:08:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:23.101 ************************************ 00:27:23.101 START TEST raid_superblock_test_4k 00:27:23.101 ************************************ 00:27:23.101 23:08:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # 
raid_superblock_test raid1 2 00:27:23.101 23:08:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:27:23.101 23:08:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:27:23.101 23:08:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:27:23.101 23:08:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:27:23.101 23:08:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:27:23.101 23:08:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:27:23.101 23:08:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:27:23.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:23.101 23:08:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:27:23.101 23:08:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:27:23.101 23:08:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:27:23.101 23:08:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:27:23.101 23:08:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:27:23.101 23:08:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:27:23.101 23:08:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:27:23.101 23:08:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:27:23.101 23:08:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=83731 00:27:23.101 23:08:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 83731 00:27:23.101 23:08:58 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 83731 ']' 00:27:23.101 23:08:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:23.101 23:08:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:23.101 23:08:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:23.101 23:08:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:23.101 23:08:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:27:23.101 23:08:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:23.101 [2024-12-09 23:08:58.293326] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:27:23.102 [2024-12-09 23:08:58.293613] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83731 ] 00:27:23.102 [2024-12-09 23:08:58.452117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.363 [2024-12-09 23:08:58.555618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.363 [2024-12-09 23:08:58.691970] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:23.363 [2024-12-09 23:08:58.692130] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:23.935 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:23.935 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:27:23.935 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:27:23.935 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:23.935 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:27:23.935 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:27:23.935 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:23.935 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:23.936 malloc1 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:23.936 [2024-12-09 23:08:59.164122] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:23.936 [2024-12-09 23:08:59.164178] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:23.936 [2024-12-09 23:08:59.164199] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:23.936 [2024-12-09 23:08:59.164209] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:23.936 [2024-12-09 23:08:59.166327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:23.936 [2024-12-09 23:08:59.166477] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:23.936 pt1 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:23.936 malloc2 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:23.936 [2024-12-09 23:08:59.200123] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:23.936 [2024-12-09 23:08:59.200170] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:23.936 [2024-12-09 23:08:59.200192] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:23.936 [2024-12-09 23:08:59.200201] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:23.936 [2024-12-09 23:08:59.202742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:23.936 [2024-12-09 
23:08:59.202775] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:23.936 pt2 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:23.936 [2024-12-09 23:08:59.208183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:23.936 [2024-12-09 23:08:59.210018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:23.936 [2024-12-09 23:08:59.210199] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:27:23.936 [2024-12-09 23:08:59.210222] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:23.936 [2024-12-09 23:08:59.210482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:27:23.936 [2024-12-09 23:08:59.210632] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:27:23.936 [2024-12-09 23:08:59.210651] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:27:23.936 [2024-12-09 23:08:59.210792] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:23.936 "name": "raid_bdev1", 00:27:23.936 "uuid": "ec656b3f-c9f3-40fd-bf31-890c84e1469c", 00:27:23.936 "strip_size_kb": 0, 00:27:23.936 "state": "online", 00:27:23.936 "raid_level": "raid1", 00:27:23.936 "superblock": true, 00:27:23.936 "num_base_bdevs": 2, 00:27:23.936 
"num_base_bdevs_discovered": 2, 00:27:23.936 "num_base_bdevs_operational": 2, 00:27:23.936 "base_bdevs_list": [ 00:27:23.936 { 00:27:23.936 "name": "pt1", 00:27:23.936 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:23.936 "is_configured": true, 00:27:23.936 "data_offset": 256, 00:27:23.936 "data_size": 7936 00:27:23.936 }, 00:27:23.936 { 00:27:23.936 "name": "pt2", 00:27:23.936 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:23.936 "is_configured": true, 00:27:23.936 "data_offset": 256, 00:27:23.936 "data_size": 7936 00:27:23.936 } 00:27:23.936 ] 00:27:23.936 }' 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:23.936 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:24.196 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:27:24.196 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:27:24.196 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:24.196 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:24.196 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:27:24.196 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:24.196 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:24.196 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.196 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:24.196 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:24.196 [2024-12-09 23:08:59.520698] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:27:24.197 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.197 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:24.197 "name": "raid_bdev1", 00:27:24.197 "aliases": [ 00:27:24.197 "ec656b3f-c9f3-40fd-bf31-890c84e1469c" 00:27:24.197 ], 00:27:24.197 "product_name": "Raid Volume", 00:27:24.197 "block_size": 4096, 00:27:24.197 "num_blocks": 7936, 00:27:24.197 "uuid": "ec656b3f-c9f3-40fd-bf31-890c84e1469c", 00:27:24.197 "assigned_rate_limits": { 00:27:24.197 "rw_ios_per_sec": 0, 00:27:24.197 "rw_mbytes_per_sec": 0, 00:27:24.197 "r_mbytes_per_sec": 0, 00:27:24.197 "w_mbytes_per_sec": 0 00:27:24.197 }, 00:27:24.197 "claimed": false, 00:27:24.197 "zoned": false, 00:27:24.197 "supported_io_types": { 00:27:24.197 "read": true, 00:27:24.197 "write": true, 00:27:24.197 "unmap": false, 00:27:24.197 "flush": false, 00:27:24.197 "reset": true, 00:27:24.197 "nvme_admin": false, 00:27:24.197 "nvme_io": false, 00:27:24.197 "nvme_io_md": false, 00:27:24.197 "write_zeroes": true, 00:27:24.197 "zcopy": false, 00:27:24.197 "get_zone_info": false, 00:27:24.197 "zone_management": false, 00:27:24.197 "zone_append": false, 00:27:24.197 "compare": false, 00:27:24.197 "compare_and_write": false, 00:27:24.197 "abort": false, 00:27:24.197 "seek_hole": false, 00:27:24.197 "seek_data": false, 00:27:24.197 "copy": false, 00:27:24.197 "nvme_iov_md": false 00:27:24.197 }, 00:27:24.197 "memory_domains": [ 00:27:24.197 { 00:27:24.197 "dma_device_id": "system", 00:27:24.197 "dma_device_type": 1 00:27:24.197 }, 00:27:24.197 { 00:27:24.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:24.197 "dma_device_type": 2 00:27:24.197 }, 00:27:24.197 { 00:27:24.197 "dma_device_id": "system", 00:27:24.197 "dma_device_type": 1 00:27:24.197 }, 00:27:24.197 { 00:27:24.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:24.197 "dma_device_type": 2 00:27:24.197 } 00:27:24.197 ], 
00:27:24.197 "driver_specific": { 00:27:24.197 "raid": { 00:27:24.197 "uuid": "ec656b3f-c9f3-40fd-bf31-890c84e1469c", 00:27:24.197 "strip_size_kb": 0, 00:27:24.197 "state": "online", 00:27:24.197 "raid_level": "raid1", 00:27:24.197 "superblock": true, 00:27:24.197 "num_base_bdevs": 2, 00:27:24.197 "num_base_bdevs_discovered": 2, 00:27:24.197 "num_base_bdevs_operational": 2, 00:27:24.197 "base_bdevs_list": [ 00:27:24.197 { 00:27:24.197 "name": "pt1", 00:27:24.197 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:24.197 "is_configured": true, 00:27:24.197 "data_offset": 256, 00:27:24.197 "data_size": 7936 00:27:24.197 }, 00:27:24.197 { 00:27:24.197 "name": "pt2", 00:27:24.197 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:24.197 "is_configured": true, 00:27:24.197 "data_offset": 256, 00:27:24.197 "data_size": 7936 00:27:24.197 } 00:27:24.197 ] 00:27:24.197 } 00:27:24.197 } 00:27:24.197 }' 00:27:24.197 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:24.460 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:27:24.460 pt2' 00:27:24.460 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:24.460 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:27:24.460 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:24.460 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:27:24.460 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.460 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:24.460 23:08:59 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:24.460 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.460 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:27:24.460 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:27:24.460 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:24.460 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:27:24.460 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.460 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:24.460 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:24.460 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.460 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:27:24.460 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:27:24.460 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:24.460 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.460 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:24.460 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:27:24.460 [2024-12-09 23:08:59.684580] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:24.460 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:27:24.460 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ec656b3f-c9f3-40fd-bf31-890c84e1469c 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z ec656b3f-c9f3-40fd-bf31-890c84e1469c ']' 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:24.461 [2024-12-09 23:08:59.728261] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:24.461 [2024-12-09 23:08:59.728287] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:24.461 [2024-12-09 23:08:59.728357] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:24.461 [2024-12-09 23:08:59.728417] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:24.461 [2024-12-09 23:08:59.728436] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.461 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:24.722 [2024-12-09 23:08:59.824328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:24.722 [2024-12-09 23:08:59.826237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:24.722 [2024-12-09 23:08:59.826305] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:27:24.722 [2024-12-09 23:08:59.826353] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:27:24.722 [2024-12-09 23:08:59.826367] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:24.722 [2024-12-09 23:08:59.826377] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:27:24.722 request: 00:27:24.722 { 00:27:24.722 "name": "raid_bdev1", 00:27:24.722 "raid_level": "raid1", 00:27:24.722 "base_bdevs": [ 00:27:24.722 "malloc1", 00:27:24.722 "malloc2" 00:27:24.722 ], 00:27:24.722 "superblock": false, 00:27:24.722 "method": "bdev_raid_create", 00:27:24.722 "req_id": 1 00:27:24.722 } 00:27:24.722 Got JSON-RPC error response 00:27:24.722 response: 00:27:24.722 { 00:27:24.722 "code": -17, 00:27:24.722 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:24.722 } 00:27:24.722 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:24.722 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:27:24.722 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:24.722 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:24.722 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:24.722 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:24.722 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.722 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:24.722 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:27:24.722 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.722 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:27:24.722 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:27:24.722 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:27:24.722 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.722 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:24.722 [2024-12-09 23:08:59.868303] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:24.722 [2024-12-09 23:08:59.868353] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:24.722 [2024-12-09 23:08:59.868372] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:27:24.722 [2024-12-09 23:08:59.868383] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:24.722 [2024-12-09 23:08:59.870567] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:24.722 [2024-12-09 23:08:59.870599] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:24.722 [2024-12-09 23:08:59.870675] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:24.722 [2024-12-09 23:08:59.870727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:24.722 pt1 00:27:24.722 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.722 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:27:24.722 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:24.722 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:24.722 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:24.722 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:24.722 23:08:59 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:24.722 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:24.722 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:24.722 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:24.722 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:24.722 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:24.722 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.722 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:24.722 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:24.722 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.722 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:24.722 "name": "raid_bdev1", 00:27:24.722 "uuid": "ec656b3f-c9f3-40fd-bf31-890c84e1469c", 00:27:24.722 "strip_size_kb": 0, 00:27:24.722 "state": "configuring", 00:27:24.722 "raid_level": "raid1", 00:27:24.722 "superblock": true, 00:27:24.722 "num_base_bdevs": 2, 00:27:24.722 "num_base_bdevs_discovered": 1, 00:27:24.722 "num_base_bdevs_operational": 2, 00:27:24.722 "base_bdevs_list": [ 00:27:24.722 { 00:27:24.722 "name": "pt1", 00:27:24.722 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:24.722 "is_configured": true, 00:27:24.722 "data_offset": 256, 00:27:24.722 "data_size": 7936 00:27:24.722 }, 00:27:24.722 { 00:27:24.722 "name": null, 00:27:24.722 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:24.722 "is_configured": false, 00:27:24.722 "data_offset": 256, 00:27:24.722 "data_size": 7936 00:27:24.722 } 
00:27:24.722 ] 00:27:24.722 }' 00:27:24.722 23:08:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:24.722 23:08:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:25.025 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:27:25.025 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:27:25.025 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:25.025 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:25.025 23:09:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.025 23:09:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:25.025 [2024-12-09 23:09:00.180422] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:25.025 [2024-12-09 23:09:00.180494] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:25.025 [2024-12-09 23:09:00.180513] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:27:25.025 [2024-12-09 23:09:00.180531] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:25.025 [2024-12-09 23:09:00.180948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:25.025 [2024-12-09 23:09:00.180979] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:25.025 [2024-12-09 23:09:00.181050] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:25.025 [2024-12-09 23:09:00.181075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:25.025 [2024-12-09 23:09:00.181200] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:27:25.025 [2024-12-09 23:09:00.181220] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:25.025 [2024-12-09 23:09:00.181461] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:27:25.025 [2024-12-09 23:09:00.181604] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:25.025 [2024-12-09 23:09:00.181618] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:27:25.025 [2024-12-09 23:09:00.181748] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:25.025 pt2 00:27:25.025 23:09:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.025 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:27:25.025 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:25.025 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:25.025 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:25.025 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:25.025 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:25.025 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:25.025 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:25.025 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:25.025 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:25.025 23:09:00 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:25.025 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:25.025 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:25.025 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:25.025 23:09:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.025 23:09:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:25.025 23:09:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.025 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:25.025 "name": "raid_bdev1", 00:27:25.025 "uuid": "ec656b3f-c9f3-40fd-bf31-890c84e1469c", 00:27:25.025 "strip_size_kb": 0, 00:27:25.025 "state": "online", 00:27:25.025 "raid_level": "raid1", 00:27:25.025 "superblock": true, 00:27:25.025 "num_base_bdevs": 2, 00:27:25.025 "num_base_bdevs_discovered": 2, 00:27:25.025 "num_base_bdevs_operational": 2, 00:27:25.025 "base_bdevs_list": [ 00:27:25.025 { 00:27:25.025 "name": "pt1", 00:27:25.025 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:25.025 "is_configured": true, 00:27:25.025 "data_offset": 256, 00:27:25.025 "data_size": 7936 00:27:25.025 }, 00:27:25.025 { 00:27:25.025 "name": "pt2", 00:27:25.025 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:25.025 "is_configured": true, 00:27:25.025 "data_offset": 256, 00:27:25.025 "data_size": 7936 00:27:25.025 } 00:27:25.025 ] 00:27:25.025 }' 00:27:25.025 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:25.025 23:09:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:25.308 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:27:25.308 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:27:25.308 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:25.308 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:25.308 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:27:25.308 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:25.308 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:25.308 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:25.308 23:09:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.308 23:09:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:25.308 [2024-12-09 23:09:00.496753] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:25.308 23:09:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.308 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:25.308 "name": "raid_bdev1", 00:27:25.308 "aliases": [ 00:27:25.308 "ec656b3f-c9f3-40fd-bf31-890c84e1469c" 00:27:25.308 ], 00:27:25.308 "product_name": "Raid Volume", 00:27:25.308 "block_size": 4096, 00:27:25.308 "num_blocks": 7936, 00:27:25.308 "uuid": "ec656b3f-c9f3-40fd-bf31-890c84e1469c", 00:27:25.308 "assigned_rate_limits": { 00:27:25.308 "rw_ios_per_sec": 0, 00:27:25.308 "rw_mbytes_per_sec": 0, 00:27:25.308 "r_mbytes_per_sec": 0, 00:27:25.308 "w_mbytes_per_sec": 0 00:27:25.308 }, 00:27:25.308 "claimed": false, 00:27:25.308 "zoned": false, 00:27:25.308 "supported_io_types": { 00:27:25.308 "read": true, 00:27:25.308 "write": true, 00:27:25.308 "unmap": false, 
00:27:25.308 "flush": false, 00:27:25.308 "reset": true, 00:27:25.308 "nvme_admin": false, 00:27:25.308 "nvme_io": false, 00:27:25.308 "nvme_io_md": false, 00:27:25.308 "write_zeroes": true, 00:27:25.308 "zcopy": false, 00:27:25.308 "get_zone_info": false, 00:27:25.308 "zone_management": false, 00:27:25.308 "zone_append": false, 00:27:25.308 "compare": false, 00:27:25.308 "compare_and_write": false, 00:27:25.308 "abort": false, 00:27:25.308 "seek_hole": false, 00:27:25.308 "seek_data": false, 00:27:25.308 "copy": false, 00:27:25.308 "nvme_iov_md": false 00:27:25.308 }, 00:27:25.308 "memory_domains": [ 00:27:25.308 { 00:27:25.308 "dma_device_id": "system", 00:27:25.308 "dma_device_type": 1 00:27:25.308 }, 00:27:25.308 { 00:27:25.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:25.308 "dma_device_type": 2 00:27:25.308 }, 00:27:25.308 { 00:27:25.308 "dma_device_id": "system", 00:27:25.308 "dma_device_type": 1 00:27:25.308 }, 00:27:25.308 { 00:27:25.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:25.308 "dma_device_type": 2 00:27:25.308 } 00:27:25.308 ], 00:27:25.308 "driver_specific": { 00:27:25.308 "raid": { 00:27:25.308 "uuid": "ec656b3f-c9f3-40fd-bf31-890c84e1469c", 00:27:25.308 "strip_size_kb": 0, 00:27:25.308 "state": "online", 00:27:25.308 "raid_level": "raid1", 00:27:25.308 "superblock": true, 00:27:25.308 "num_base_bdevs": 2, 00:27:25.308 "num_base_bdevs_discovered": 2, 00:27:25.308 "num_base_bdevs_operational": 2, 00:27:25.308 "base_bdevs_list": [ 00:27:25.308 { 00:27:25.308 "name": "pt1", 00:27:25.308 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:25.308 "is_configured": true, 00:27:25.308 "data_offset": 256, 00:27:25.308 "data_size": 7936 00:27:25.308 }, 00:27:25.308 { 00:27:25.308 "name": "pt2", 00:27:25.308 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:25.308 "is_configured": true, 00:27:25.308 "data_offset": 256, 00:27:25.308 "data_size": 7936 00:27:25.308 } 00:27:25.308 ] 00:27:25.308 } 00:27:25.308 } 00:27:25.308 }' 00:27:25.308 
23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:25.308 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:27:25.308 pt2' 00:27:25.308 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:25.308 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:27:25.308 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:25.308 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:27:25.308 23:09:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.308 23:09:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:25.308 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:25.308 23:09:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.308 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:27:25.308 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:27:25.308 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:25.308 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:27:25.308 23:09:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.308 23:09:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:25.308 23:09:00 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:25.308 23:09:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.308 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:27:25.308 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:27:25.308 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:25.308 23:09:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.308 23:09:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:25.308 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:27:25.308 [2024-12-09 23:09:00.668787] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:25.570 23:09:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.570 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' ec656b3f-c9f3-40fd-bf31-890c84e1469c '!=' ec656b3f-c9f3-40fd-bf31-890c84e1469c ']' 00:27:25.570 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:27:25.570 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:25.570 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:27:25.570 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:27:25.570 23:09:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.570 23:09:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:25.570 [2024-12-09 23:09:00.704593] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:27:25.570 23:09:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.570 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:25.570 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:25.570 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:25.570 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:25.570 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:25.570 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:25.570 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:25.570 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:25.570 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:25.570 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:25.570 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:25.570 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:25.570 23:09:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.570 23:09:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:25.570 23:09:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.570 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:25.570 "name": "raid_bdev1", 00:27:25.570 "uuid": 
"ec656b3f-c9f3-40fd-bf31-890c84e1469c", 00:27:25.570 "strip_size_kb": 0, 00:27:25.570 "state": "online", 00:27:25.570 "raid_level": "raid1", 00:27:25.570 "superblock": true, 00:27:25.570 "num_base_bdevs": 2, 00:27:25.570 "num_base_bdevs_discovered": 1, 00:27:25.570 "num_base_bdevs_operational": 1, 00:27:25.570 "base_bdevs_list": [ 00:27:25.570 { 00:27:25.570 "name": null, 00:27:25.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:25.570 "is_configured": false, 00:27:25.570 "data_offset": 0, 00:27:25.570 "data_size": 7936 00:27:25.570 }, 00:27:25.570 { 00:27:25.570 "name": "pt2", 00:27:25.570 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:25.570 "is_configured": true, 00:27:25.570 "data_offset": 256, 00:27:25.570 "data_size": 7936 00:27:25.570 } 00:27:25.570 ] 00:27:25.570 }' 00:27:25.570 23:09:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:25.570 23:09:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:25.832 [2024-12-09 23:09:01.036619] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:25.832 [2024-12-09 23:09:01.036647] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:25.832 [2024-12-09 23:09:01.036710] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:25.832 [2024-12-09 23:09:01.036755] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:25.832 [2024-12-09 23:09:01.036766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:25.832 [2024-12-09 23:09:01.084619] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:25.832 [2024-12-09 23:09:01.084670] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:25.832 [2024-12-09 23:09:01.084685] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:27:25.832 [2024-12-09 23:09:01.084695] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:25.832 [2024-12-09 23:09:01.086882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:25.832 [2024-12-09 23:09:01.086920] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:25.832 [2024-12-09 23:09:01.086989] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:25.832 [2024-12-09 23:09:01.087032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:25.832 [2024-12-09 23:09:01.087137] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:27:25.832 [2024-12-09 23:09:01.087150] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:25.832 [2024-12-09 23:09:01.087382] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:25.832 [2024-12-09 23:09:01.087522] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:27:25.832 [2024-12-09 23:09:01.087537] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:27:25.832 [2024-12-09 23:09:01.087666] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:25.832 pt2 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:25.832 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.832 23:09:01 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:25.832 "name": "raid_bdev1", 00:27:25.832 "uuid": "ec656b3f-c9f3-40fd-bf31-890c84e1469c", 00:27:25.832 "strip_size_kb": 0, 00:27:25.832 "state": "online", 00:27:25.832 "raid_level": "raid1", 00:27:25.832 "superblock": true, 00:27:25.832 "num_base_bdevs": 2, 00:27:25.832 "num_base_bdevs_discovered": 1, 00:27:25.832 "num_base_bdevs_operational": 1, 00:27:25.832 "base_bdevs_list": [ 00:27:25.832 { 00:27:25.832 "name": null, 00:27:25.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:25.832 "is_configured": false, 00:27:25.832 "data_offset": 256, 00:27:25.833 "data_size": 7936 00:27:25.833 }, 00:27:25.833 { 00:27:25.833 "name": "pt2", 00:27:25.833 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:25.833 "is_configured": true, 00:27:25.833 "data_offset": 256, 00:27:25.833 "data_size": 7936 00:27:25.833 } 00:27:25.833 ] 00:27:25.833 }' 00:27:25.833 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:25.833 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:26.095 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:26.095 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.095 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:26.095 [2024-12-09 23:09:01.392655] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:26.095 [2024-12-09 23:09:01.392679] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:26.095 [2024-12-09 23:09:01.392733] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:26.095 [2024-12-09 23:09:01.392772] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:27:26.095 [2024-12-09 23:09:01.392785] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:27:26.095 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.095 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:26.095 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.095 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:27:26.095 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:26.095 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.095 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:27:26.095 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:27:26.095 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:27:26.095 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:26.095 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.095 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:26.095 [2024-12-09 23:09:01.432671] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:26.095 [2024-12-09 23:09:01.432717] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:26.095 [2024-12-09 23:09:01.432732] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:27:26.095 [2024-12-09 23:09:01.432739] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:26.095 [2024-12-09 23:09:01.434574] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:26.095 [2024-12-09 23:09:01.434600] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:26.095 [2024-12-09 23:09:01.434660] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:26.095 [2024-12-09 23:09:01.434694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:26.095 [2024-12-09 23:09:01.434798] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:27:26.095 [2024-12-09 23:09:01.434811] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:26.095 [2024-12-09 23:09:01.434824] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:27:26.095 [2024-12-09 23:09:01.434865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:26.095 [2024-12-09 23:09:01.434921] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:27:26.095 [2024-12-09 23:09:01.434928] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:26.095 [2024-12-09 23:09:01.435141] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:27:26.095 [2024-12-09 23:09:01.435258] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:27:26.095 [2024-12-09 23:09:01.435272] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:27:26.095 [2024-12-09 23:09:01.435383] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:26.095 pt1 00:27:26.095 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.095 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:27:26.095 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:26.095 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:26.095 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:26.095 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:26.095 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:26.095 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:26.095 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:26.095 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:26.095 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:26.095 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:26.095 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:26.095 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:26.095 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.095 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:26.095 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.356 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:26.356 "name": "raid_bdev1", 00:27:26.356 "uuid": "ec656b3f-c9f3-40fd-bf31-890c84e1469c", 00:27:26.356 "strip_size_kb": 0, 00:27:26.356 "state": "online", 00:27:26.356 
"raid_level": "raid1", 00:27:26.356 "superblock": true, 00:27:26.356 "num_base_bdevs": 2, 00:27:26.356 "num_base_bdevs_discovered": 1, 00:27:26.356 "num_base_bdevs_operational": 1, 00:27:26.356 "base_bdevs_list": [ 00:27:26.356 { 00:27:26.356 "name": null, 00:27:26.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:26.356 "is_configured": false, 00:27:26.356 "data_offset": 256, 00:27:26.356 "data_size": 7936 00:27:26.356 }, 00:27:26.356 { 00:27:26.356 "name": "pt2", 00:27:26.356 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:26.356 "is_configured": true, 00:27:26.356 "data_offset": 256, 00:27:26.356 "data_size": 7936 00:27:26.356 } 00:27:26.356 ] 00:27:26.356 }' 00:27:26.356 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:26.356 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:26.615 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:27:26.615 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:27:26.615 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.615 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:26.615 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.615 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:27:26.615 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:26.615 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:27:26.615 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.615 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # 
set +x 00:27:26.615 [2024-12-09 23:09:01.784946] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:26.615 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.615 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' ec656b3f-c9f3-40fd-bf31-890c84e1469c '!=' ec656b3f-c9f3-40fd-bf31-890c84e1469c ']' 00:27:26.615 23:09:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 83731 00:27:26.615 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 83731 ']' 00:27:26.615 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 83731 00:27:26.615 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:27:26.616 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:26.616 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83731 00:27:26.616 killing process with pid 83731 00:27:26.616 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:26.616 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:26.616 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83731' 00:27:26.616 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 83731 00:27:26.616 [2024-12-09 23:09:01.831683] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:26.616 [2024-12-09 23:09:01.831751] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:26.616 23:09:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 83731 00:27:26.616 [2024-12-09 23:09:01.831788] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:26.616 [2024-12-09 23:09:01.831800] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:27:26.616 [2024-12-09 23:09:01.933821] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:27.185 ************************************ 00:27:27.185 23:09:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:27:27.185 00:27:27.185 real 0m4.293s 00:27:27.185 user 0m6.550s 00:27:27.185 sys 0m0.742s 00:27:27.185 23:09:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:27.185 23:09:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:27.185 END TEST raid_superblock_test_4k 00:27:27.185 ************************************ 00:27:27.446 23:09:02 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:27:27.446 23:09:02 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:27:27.446 23:09:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:27:27.446 23:09:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:27.446 23:09:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:27.446 ************************************ 00:27:27.446 START TEST raid_rebuild_test_sb_4k 00:27:27.446 ************************************ 00:27:27.446 23:09:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:27:27.446 23:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:27:27.446 23:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:27:27.446 23:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:27:27.446 
23:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:27:27.446 23:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:27:27.446 23:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:27:27.446 23:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:27.446 23:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:27:27.446 23:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:27.446 23:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:27.446 23:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:27:27.446 23:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:27.446 23:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:27.446 23:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:27.446 23:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:27:27.446 23:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:27:27.446 23:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:27:27.446 23:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:27:27.446 23:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:27:27.446 23:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:27:27.446 23:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:27:27.446 23:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # 
strip_size=0 00:27:27.446 23:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:27:27.446 23:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:27:27.446 23:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=84043 00:27:27.446 23:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 84043 00:27:27.446 23:09:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 84043 ']' 00:27:27.446 23:09:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:27.446 23:09:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:27.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:27.446 23:09:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:27.446 23:09:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:27.446 23:09:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:27.446 23:09:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:27.446 [2024-12-09 23:09:02.634403] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:27:27.446 [2024-12-09 23:09:02.634531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84043 ] 00:27:27.446 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:27:27.446 Zero copy mechanism will not be used. 00:27:27.446 [2024-12-09 23:09:02.789164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.715 [2024-12-09 23:09:02.877547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.715 [2024-12-09 23:09:02.990578] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:27.715 [2024-12-09 23:09:02.990607] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:28.287 BaseBdev1_malloc 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:28.287 [2024-12-09 23:09:03.504118] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:28.287 [2024-12-09 23:09:03.504174] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:28.287 [2024-12-09 23:09:03.504192] vbdev_passthru.c: 682:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007280 00:27:28.287 [2024-12-09 23:09:03.504201] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:28.287 [2024-12-09 23:09:03.505979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:28.287 [2024-12-09 23:09:03.506017] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:28.287 BaseBdev1 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:28.287 BaseBdev2_malloc 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:28.287 [2024-12-09 23:09:03.535978] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:28.287 [2024-12-09 23:09:03.536025] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:28.287 [2024-12-09 23:09:03.536044] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:28.287 [2024-12-09 23:09:03.536053] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:27:28.287 [2024-12-09 23:09:03.537881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:28.287 [2024-12-09 23:09:03.537914] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:28.287 BaseBdev2 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:28.287 spare_malloc 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:28.287 spare_delay 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:28.287 [2024-12-09 23:09:03.588373] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:28.287 [2024-12-09 23:09:03.588426] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:28.287 [2024-12-09 23:09:03.588441] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:27:28.287 [2024-12-09 23:09:03.588449] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:28.287 [2024-12-09 23:09:03.590247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:28.287 [2024-12-09 23:09:03.590280] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:28.287 spare 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:28.287 [2024-12-09 23:09:03.596418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:28.287 [2024-12-09 23:09:03.598007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:28.287 [2024-12-09 23:09:03.598169] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:27:28.287 [2024-12-09 23:09:03.598188] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:28.287 [2024-12-09 23:09:03.598394] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:27:28.287 [2024-12-09 23:09:03.598527] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:27:28.287 [2024-12-09 23:09:03.598540] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:27:28.287 [2024-12-09 23:09:03.598652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:28.287 
23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.287 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:28.288 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.288 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:28.288 "name": "raid_bdev1", 00:27:28.288 "uuid": "96aded8c-1ecc-44de-bb3e-050d856f24f1", 
00:27:28.288 "strip_size_kb": 0, 00:27:28.288 "state": "online", 00:27:28.288 "raid_level": "raid1", 00:27:28.288 "superblock": true, 00:27:28.288 "num_base_bdevs": 2, 00:27:28.288 "num_base_bdevs_discovered": 2, 00:27:28.288 "num_base_bdevs_operational": 2, 00:27:28.288 "base_bdevs_list": [ 00:27:28.288 { 00:27:28.288 "name": "BaseBdev1", 00:27:28.288 "uuid": "ff8563b5-4276-50ec-95c2-a72d89122d48", 00:27:28.288 "is_configured": true, 00:27:28.288 "data_offset": 256, 00:27:28.288 "data_size": 7936 00:27:28.288 }, 00:27:28.288 { 00:27:28.288 "name": "BaseBdev2", 00:27:28.288 "uuid": "2957fb0a-2dd9-5d13-8dda-ff321973bc0d", 00:27:28.288 "is_configured": true, 00:27:28.288 "data_offset": 256, 00:27:28.288 "data_size": 7936 00:27:28.288 } 00:27:28.288 ] 00:27:28.288 }' 00:27:28.288 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:28.288 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:28.859 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:28.859 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.859 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:27:28.859 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:28.859 [2024-12-09 23:09:03.940727] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:28.859 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.859 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:27:28.859 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:28.859 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.859 23:09:03 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:28.859 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:28.859 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.859 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:27:28.859 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:27:28.859 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:27:28.859 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:27:28.859 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:27:28.859 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:27:28.860 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:27:28.860 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:28.860 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:28.860 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:28.860 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:27:28.860 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:28.860 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:28.860 23:09:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:27:28.860 [2024-12-09 23:09:04.184570] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005fb0 00:27:28.860 /dev/nbd0 00:27:28.860 23:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:28.860 23:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:28.860 23:09:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:28.860 23:09:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:27:28.860 23:09:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:28.860 23:09:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:28.860 23:09:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:27:29.120 23:09:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:27:29.120 23:09:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:29.120 23:09:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:29.120 23:09:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:29.120 1+0 records in 00:27:29.120 1+0 records out 00:27:29.120 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025981 s, 15.8 MB/s 00:27:29.120 23:09:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:29.120 23:09:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:27:29.120 23:09:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:29.120 23:09:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:29.120 23:09:04 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:27:29.120 23:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:29.120 23:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:29.120 23:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:27:29.120 23:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:27:29.120 23:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:27:29.694 7936+0 records in 00:27:29.694 7936+0 records out 00:27:29.694 32505856 bytes (33 MB, 31 MiB) copied, 0.639371 s, 50.8 MB/s 00:27:29.694 23:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:27:29.694 23:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:27:29.694 23:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:29.694 23:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:29.694 23:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:27:29.694 23:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:29.694 23:09:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:27:29.954 23:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:29.954 23:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:29.954 23:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:29.954 23:09:05 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:29.954 23:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:29.954 23:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:29.954 [2024-12-09 23:09:05.088726] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:29.954 23:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:27:29.954 23:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:27:29.954 23:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:27:29.954 23:09:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.954 23:09:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:29.954 [2024-12-09 23:09:05.098019] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:29.954 23:09:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.954 23:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:29.954 23:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:29.954 23:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:29.954 23:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:29.954 23:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:29.954 23:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:29.954 23:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:29.954 23:09:05 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:29.954 23:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:29.954 23:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:29.954 23:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:29.954 23:09:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.954 23:09:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:29.954 23:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:29.954 23:09:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.954 23:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:29.954 "name": "raid_bdev1", 00:27:29.954 "uuid": "96aded8c-1ecc-44de-bb3e-050d856f24f1", 00:27:29.954 "strip_size_kb": 0, 00:27:29.954 "state": "online", 00:27:29.954 "raid_level": "raid1", 00:27:29.954 "superblock": true, 00:27:29.954 "num_base_bdevs": 2, 00:27:29.954 "num_base_bdevs_discovered": 1, 00:27:29.954 "num_base_bdevs_operational": 1, 00:27:29.954 "base_bdevs_list": [ 00:27:29.954 { 00:27:29.954 "name": null, 00:27:29.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:29.954 "is_configured": false, 00:27:29.954 "data_offset": 0, 00:27:29.954 "data_size": 7936 00:27:29.954 }, 00:27:29.954 { 00:27:29.954 "name": "BaseBdev2", 00:27:29.954 "uuid": "2957fb0a-2dd9-5d13-8dda-ff321973bc0d", 00:27:29.954 "is_configured": true, 00:27:29.954 "data_offset": 256, 00:27:29.954 "data_size": 7936 00:27:29.954 } 00:27:29.954 ] 00:27:29.954 }' 00:27:29.954 23:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:29.954 23:09:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:27:30.215 23:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:30.215 23:09:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.215 23:09:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:30.215 [2024-12-09 23:09:05.430111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:30.215 [2024-12-09 23:09:05.439730] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:27:30.215 23:09:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.215 23:09:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:27:30.215 [2024-12-09 23:09:05.441349] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:31.185 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:31.185 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:31.185 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:31.185 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:31.185 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:31.185 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:31.185 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:31.185 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.185 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:31.185 23:09:06 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.185 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:31.185 "name": "raid_bdev1", 00:27:31.185 "uuid": "96aded8c-1ecc-44de-bb3e-050d856f24f1", 00:27:31.185 "strip_size_kb": 0, 00:27:31.185 "state": "online", 00:27:31.185 "raid_level": "raid1", 00:27:31.185 "superblock": true, 00:27:31.185 "num_base_bdevs": 2, 00:27:31.185 "num_base_bdevs_discovered": 2, 00:27:31.185 "num_base_bdevs_operational": 2, 00:27:31.185 "process": { 00:27:31.185 "type": "rebuild", 00:27:31.185 "target": "spare", 00:27:31.185 "progress": { 00:27:31.185 "blocks": 2560, 00:27:31.185 "percent": 32 00:27:31.185 } 00:27:31.185 }, 00:27:31.185 "base_bdevs_list": [ 00:27:31.185 { 00:27:31.185 "name": "spare", 00:27:31.185 "uuid": "2d73a740-b8ac-52ba-bb2d-695ba8aa6360", 00:27:31.185 "is_configured": true, 00:27:31.185 "data_offset": 256, 00:27:31.185 "data_size": 7936 00:27:31.185 }, 00:27:31.185 { 00:27:31.185 "name": "BaseBdev2", 00:27:31.185 "uuid": "2957fb0a-2dd9-5d13-8dda-ff321973bc0d", 00:27:31.185 "is_configured": true, 00:27:31.185 "data_offset": 256, 00:27:31.185 "data_size": 7936 00:27:31.185 } 00:27:31.185 ] 00:27:31.185 }' 00:27:31.185 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:31.185 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:31.185 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:31.467 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:31.467 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:27:31.467 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.467 23:09:06 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:31.467 [2024-12-09 23:09:06.539644] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:31.467 [2024-12-09 23:09:06.546596] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:31.467 [2024-12-09 23:09:06.546654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:31.467 [2024-12-09 23:09:06.546667] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:31.467 [2024-12-09 23:09:06.546674] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:31.467 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.467 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:31.467 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:31.467 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:31.467 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:31.467 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:31.467 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:31.467 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:31.467 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:31.467 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:31.467 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:31.467 23:09:06 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:31.467 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:31.467 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.467 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:31.467 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.467 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:31.467 "name": "raid_bdev1", 00:27:31.467 "uuid": "96aded8c-1ecc-44de-bb3e-050d856f24f1", 00:27:31.467 "strip_size_kb": 0, 00:27:31.467 "state": "online", 00:27:31.467 "raid_level": "raid1", 00:27:31.467 "superblock": true, 00:27:31.467 "num_base_bdevs": 2, 00:27:31.467 "num_base_bdevs_discovered": 1, 00:27:31.467 "num_base_bdevs_operational": 1, 00:27:31.467 "base_bdevs_list": [ 00:27:31.467 { 00:27:31.467 "name": null, 00:27:31.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:31.467 "is_configured": false, 00:27:31.467 "data_offset": 0, 00:27:31.467 "data_size": 7936 00:27:31.467 }, 00:27:31.467 { 00:27:31.467 "name": "BaseBdev2", 00:27:31.467 "uuid": "2957fb0a-2dd9-5d13-8dda-ff321973bc0d", 00:27:31.467 "is_configured": true, 00:27:31.467 "data_offset": 256, 00:27:31.467 "data_size": 7936 00:27:31.467 } 00:27:31.467 ] 00:27:31.467 }' 00:27:31.467 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:31.467 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:31.728 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:31.728 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:31.728 23:09:06 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:31.728 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:31.728 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:31.728 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:31.728 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.728 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:31.728 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:31.728 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.728 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:31.728 "name": "raid_bdev1", 00:27:31.728 "uuid": "96aded8c-1ecc-44de-bb3e-050d856f24f1", 00:27:31.728 "strip_size_kb": 0, 00:27:31.728 "state": "online", 00:27:31.728 "raid_level": "raid1", 00:27:31.728 "superblock": true, 00:27:31.728 "num_base_bdevs": 2, 00:27:31.728 "num_base_bdevs_discovered": 1, 00:27:31.728 "num_base_bdevs_operational": 1, 00:27:31.728 "base_bdevs_list": [ 00:27:31.728 { 00:27:31.728 "name": null, 00:27:31.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:31.728 "is_configured": false, 00:27:31.728 "data_offset": 0, 00:27:31.728 "data_size": 7936 00:27:31.728 }, 00:27:31.728 { 00:27:31.728 "name": "BaseBdev2", 00:27:31.728 "uuid": "2957fb0a-2dd9-5d13-8dda-ff321973bc0d", 00:27:31.728 "is_configured": true, 00:27:31.728 "data_offset": 256, 00:27:31.728 "data_size": 7936 00:27:31.728 } 00:27:31.728 ] 00:27:31.728 }' 00:27:31.728 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:31.728 23:09:06 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:31.728 23:09:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:31.728 23:09:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:31.728 23:09:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:31.728 23:09:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.728 23:09:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:31.728 [2024-12-09 23:09:07.018258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:31.728 [2024-12-09 23:09:07.027523] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:27:31.728 23:09:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.728 23:09:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:27:31.728 [2024-12-09 23:09:07.029155] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:33.113 "name": "raid_bdev1", 00:27:33.113 "uuid": "96aded8c-1ecc-44de-bb3e-050d856f24f1", 00:27:33.113 "strip_size_kb": 0, 00:27:33.113 "state": "online", 00:27:33.113 "raid_level": "raid1", 00:27:33.113 "superblock": true, 00:27:33.113 "num_base_bdevs": 2, 00:27:33.113 "num_base_bdevs_discovered": 2, 00:27:33.113 "num_base_bdevs_operational": 2, 00:27:33.113 "process": { 00:27:33.113 "type": "rebuild", 00:27:33.113 "target": "spare", 00:27:33.113 "progress": { 00:27:33.113 "blocks": 2560, 00:27:33.113 "percent": 32 00:27:33.113 } 00:27:33.113 }, 00:27:33.113 "base_bdevs_list": [ 00:27:33.113 { 00:27:33.113 "name": "spare", 00:27:33.113 "uuid": "2d73a740-b8ac-52ba-bb2d-695ba8aa6360", 00:27:33.113 "is_configured": true, 00:27:33.113 "data_offset": 256, 00:27:33.113 "data_size": 7936 00:27:33.113 }, 00:27:33.113 { 00:27:33.113 "name": "BaseBdev2", 00:27:33.113 "uuid": "2957fb0a-2dd9-5d13-8dda-ff321973bc0d", 00:27:33.113 "is_configured": true, 00:27:33.113 "data_offset": 256, 00:27:33.113 "data_size": 7936 00:27:33.113 } 00:27:33.113 ] 00:27:33.113 }' 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:27:33.113 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=556 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:33.113 23:09:08 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:33.113 "name": "raid_bdev1", 00:27:33.113 "uuid": "96aded8c-1ecc-44de-bb3e-050d856f24f1", 00:27:33.113 "strip_size_kb": 0, 00:27:33.113 "state": "online", 00:27:33.113 "raid_level": "raid1", 00:27:33.113 "superblock": true, 00:27:33.113 "num_base_bdevs": 2, 00:27:33.113 "num_base_bdevs_discovered": 2, 00:27:33.113 "num_base_bdevs_operational": 2, 00:27:33.113 "process": { 00:27:33.113 "type": "rebuild", 00:27:33.113 "target": "spare", 00:27:33.113 "progress": { 00:27:33.113 "blocks": 2816, 00:27:33.113 "percent": 35 00:27:33.113 } 00:27:33.113 }, 00:27:33.113 "base_bdevs_list": [ 00:27:33.113 { 00:27:33.113 "name": "spare", 00:27:33.113 "uuid": "2d73a740-b8ac-52ba-bb2d-695ba8aa6360", 00:27:33.113 "is_configured": true, 00:27:33.113 "data_offset": 256, 00:27:33.113 "data_size": 7936 00:27:33.113 }, 00:27:33.113 { 00:27:33.113 "name": "BaseBdev2", 00:27:33.113 "uuid": "2957fb0a-2dd9-5d13-8dda-ff321973bc0d", 00:27:33.113 "is_configured": true, 00:27:33.113 "data_offset": 256, 00:27:33.113 "data_size": 7936 00:27:33.113 } 00:27:33.113 ] 00:27:33.113 }' 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:33.113 23:09:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:34.049 23:09:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:34.049 23:09:09 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:34.049 23:09:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:34.049 23:09:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:34.049 23:09:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:34.049 23:09:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:34.049 23:09:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:34.049 23:09:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:34.049 23:09:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.049 23:09:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:34.049 23:09:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.049 23:09:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:34.049 "name": "raid_bdev1", 00:27:34.049 "uuid": "96aded8c-1ecc-44de-bb3e-050d856f24f1", 00:27:34.049 "strip_size_kb": 0, 00:27:34.049 "state": "online", 00:27:34.049 "raid_level": "raid1", 00:27:34.049 "superblock": true, 00:27:34.049 "num_base_bdevs": 2, 00:27:34.049 "num_base_bdevs_discovered": 2, 00:27:34.049 "num_base_bdevs_operational": 2, 00:27:34.049 "process": { 00:27:34.049 "type": "rebuild", 00:27:34.049 "target": "spare", 00:27:34.049 "progress": { 00:27:34.049 "blocks": 5632, 00:27:34.049 "percent": 70 00:27:34.049 } 00:27:34.049 }, 00:27:34.049 "base_bdevs_list": [ 00:27:34.049 { 00:27:34.049 "name": "spare", 00:27:34.049 "uuid": "2d73a740-b8ac-52ba-bb2d-695ba8aa6360", 00:27:34.049 "is_configured": true, 00:27:34.049 "data_offset": 256, 00:27:34.049 "data_size": 7936 00:27:34.049 
}, 00:27:34.049 { 00:27:34.049 "name": "BaseBdev2", 00:27:34.049 "uuid": "2957fb0a-2dd9-5d13-8dda-ff321973bc0d", 00:27:34.049 "is_configured": true, 00:27:34.049 "data_offset": 256, 00:27:34.049 "data_size": 7936 00:27:34.049 } 00:27:34.049 ] 00:27:34.049 }' 00:27:34.049 23:09:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:34.049 23:09:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:34.049 23:09:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:34.049 23:09:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:34.049 23:09:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:35.035 [2024-12-09 23:09:10.143476] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:35.035 [2024-12-09 23:09:10.143578] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:35.035 [2024-12-09 23:09:10.143733] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:35.035 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:35.035 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:35.035 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:35.035 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:35.035 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:35.035 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:35.035 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:27:35.035 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:35.035 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.035 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:35.035 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.035 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:35.035 "name": "raid_bdev1", 00:27:35.035 "uuid": "96aded8c-1ecc-44de-bb3e-050d856f24f1", 00:27:35.035 "strip_size_kb": 0, 00:27:35.035 "state": "online", 00:27:35.035 "raid_level": "raid1", 00:27:35.035 "superblock": true, 00:27:35.035 "num_base_bdevs": 2, 00:27:35.035 "num_base_bdevs_discovered": 2, 00:27:35.035 "num_base_bdevs_operational": 2, 00:27:35.035 "base_bdevs_list": [ 00:27:35.035 { 00:27:35.035 "name": "spare", 00:27:35.035 "uuid": "2d73a740-b8ac-52ba-bb2d-695ba8aa6360", 00:27:35.035 "is_configured": true, 00:27:35.035 "data_offset": 256, 00:27:35.035 "data_size": 7936 00:27:35.035 }, 00:27:35.035 { 00:27:35.035 "name": "BaseBdev2", 00:27:35.035 "uuid": "2957fb0a-2dd9-5d13-8dda-ff321973bc0d", 00:27:35.035 "is_configured": true, 00:27:35.035 "data_offset": 256, 00:27:35.035 "data_size": 7936 00:27:35.035 } 00:27:35.035 ] 00:27:35.035 }' 00:27:35.035 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:35.295 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:35.295 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:35.295 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:27:35.295 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
00:27:35.295 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:35.295 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:35.295 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:35.295 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:35.295 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:35.295 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:35.295 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.295 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:35.295 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:35.295 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.295 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:35.295 "name": "raid_bdev1", 00:27:35.295 "uuid": "96aded8c-1ecc-44de-bb3e-050d856f24f1", 00:27:35.295 "strip_size_kb": 0, 00:27:35.295 "state": "online", 00:27:35.295 "raid_level": "raid1", 00:27:35.295 "superblock": true, 00:27:35.295 "num_base_bdevs": 2, 00:27:35.295 "num_base_bdevs_discovered": 2, 00:27:35.295 "num_base_bdevs_operational": 2, 00:27:35.295 "base_bdevs_list": [ 00:27:35.295 { 00:27:35.295 "name": "spare", 00:27:35.295 "uuid": "2d73a740-b8ac-52ba-bb2d-695ba8aa6360", 00:27:35.295 "is_configured": true, 00:27:35.295 "data_offset": 256, 00:27:35.295 "data_size": 7936 00:27:35.295 }, 00:27:35.295 { 00:27:35.295 "name": "BaseBdev2", 00:27:35.295 "uuid": "2957fb0a-2dd9-5d13-8dda-ff321973bc0d", 00:27:35.295 "is_configured": true, 
00:27:35.295 "data_offset": 256, 00:27:35.295 "data_size": 7936 00:27:35.295 } 00:27:35.295 ] 00:27:35.295 }' 00:27:35.295 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:35.295 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:35.295 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:35.295 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:35.295 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:35.295 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:35.295 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:35.295 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:35.295 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:35.295 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:35.295 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:35.295 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:35.295 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:35.295 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:35.295 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:35.295 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:35.295 23:09:10 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.295 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:35.295 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.295 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:35.295 "name": "raid_bdev1", 00:27:35.296 "uuid": "96aded8c-1ecc-44de-bb3e-050d856f24f1", 00:27:35.296 "strip_size_kb": 0, 00:27:35.296 "state": "online", 00:27:35.296 "raid_level": "raid1", 00:27:35.296 "superblock": true, 00:27:35.296 "num_base_bdevs": 2, 00:27:35.296 "num_base_bdevs_discovered": 2, 00:27:35.296 "num_base_bdevs_operational": 2, 00:27:35.296 "base_bdevs_list": [ 00:27:35.296 { 00:27:35.296 "name": "spare", 00:27:35.296 "uuid": "2d73a740-b8ac-52ba-bb2d-695ba8aa6360", 00:27:35.296 "is_configured": true, 00:27:35.296 "data_offset": 256, 00:27:35.296 "data_size": 7936 00:27:35.296 }, 00:27:35.296 { 00:27:35.296 "name": "BaseBdev2", 00:27:35.296 "uuid": "2957fb0a-2dd9-5d13-8dda-ff321973bc0d", 00:27:35.296 "is_configured": true, 00:27:35.296 "data_offset": 256, 00:27:35.296 "data_size": 7936 00:27:35.296 } 00:27:35.296 ] 00:27:35.296 }' 00:27:35.296 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:35.296 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:35.556 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:35.556 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.556 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:35.556 [2024-12-09 23:09:10.878664] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:35.556 [2024-12-09 23:09:10.878804] bdev_raid.c:1899:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:27:35.556 [2024-12-09 23:09:10.878875] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:35.556 [2024-12-09 23:09:10.878932] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:35.556 [2024-12-09 23:09:10.878942] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:27:35.556 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.556 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:35.556 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.556 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:35.556 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:27:35.556 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.816 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:27:35.816 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:27:35.816 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:27:35.816 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:35.816 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:27:35.816 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:27:35.816 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:35.816 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:35.816 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:35.816 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:27:35.816 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:35.816 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:35.816 23:09:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:35.816 /dev/nbd0 00:27:36.079 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:36.079 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:36.079 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:36.079 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:27:36.079 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:36.079 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:36.079 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:27:36.079 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:27:36.079 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:36.079 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:36.079 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:36.079 1+0 records in 00:27:36.079 1+0 records out 00:27:36.079 
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248516 s, 16.5 MB/s 00:27:36.079 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:36.079 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:27:36.079 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:36.079 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:36.079 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:27:36.079 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:36.079 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:36.079 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:27:36.079 /dev/nbd1 00:27:36.079 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:36.079 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:36.079 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:27:36.079 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:27:36.079 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:36.079 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:36.079 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:27:36.079 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:27:36.079 23:09:11 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:36.079 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:36.079 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:36.079 1+0 records in 00:27:36.079 1+0 records out 00:27:36.079 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251976 s, 16.3 MB/s 00:27:36.079 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:36.079 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:27:36.079 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:36.079 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:36.079 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:27:36.080 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:36.080 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:36.080 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:27:36.340 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:27:36.340 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:27:36.340 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:36.340 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:36.340 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@51 -- # local i 00:27:36.340 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:36.340 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:27:36.602 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:36.602 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:36.602 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:36.602 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:36.602 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:36.602 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:36.602 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:27:36.602 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:27:36.602 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:36.602 23:09:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:27:36.864 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:36.864 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:36.864 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:36.864 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:36.864 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:36.864 23:09:12 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:36.864 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:27:36.864 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:27:36.864 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:27:36.864 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:27:36.864 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.864 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:36.864 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.864 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:27:36.864 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.864 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:36.864 [2024-12-09 23:09:12.026802] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:36.865 [2024-12-09 23:09:12.026848] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:36.865 [2024-12-09 23:09:12.026867] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:27:36.865 [2024-12-09 23:09:12.026875] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:36.865 [2024-12-09 23:09:12.028762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:36.865 [2024-12-09 23:09:12.028793] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:36.865 [2024-12-09 23:09:12.028874] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: 
raid superblock found on bdev spare 00:27:36.865 [2024-12-09 23:09:12.028913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:36.865 [2024-12-09 23:09:12.029021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:36.865 spare 00:27:36.865 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.865 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:27:36.865 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.865 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:36.865 [2024-12-09 23:09:12.129109] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:27:36.865 [2024-12-09 23:09:12.129154] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:36.865 [2024-12-09 23:09:12.129428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:27:36.865 [2024-12-09 23:09:12.129592] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:27:36.865 [2024-12-09 23:09:12.129602] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:27:36.865 [2024-12-09 23:09:12.129747] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:36.865 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.865 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:36.865 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:36.865 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:36.865 
23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:36.865 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:36.865 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:36.865 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:36.865 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:36.865 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:36.865 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:36.865 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:36.865 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:36.865 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.865 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:36.865 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.865 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:36.865 "name": "raid_bdev1", 00:27:36.865 "uuid": "96aded8c-1ecc-44de-bb3e-050d856f24f1", 00:27:36.865 "strip_size_kb": 0, 00:27:36.865 "state": "online", 00:27:36.865 "raid_level": "raid1", 00:27:36.865 "superblock": true, 00:27:36.865 "num_base_bdevs": 2, 00:27:36.865 "num_base_bdevs_discovered": 2, 00:27:36.865 "num_base_bdevs_operational": 2, 00:27:36.865 "base_bdevs_list": [ 00:27:36.865 { 00:27:36.865 "name": "spare", 00:27:36.865 "uuid": "2d73a740-b8ac-52ba-bb2d-695ba8aa6360", 00:27:36.865 "is_configured": true, 00:27:36.865 "data_offset": 256, 00:27:36.865 
"data_size": 7936 00:27:36.865 }, 00:27:36.865 { 00:27:36.865 "name": "BaseBdev2", 00:27:36.865 "uuid": "2957fb0a-2dd9-5d13-8dda-ff321973bc0d", 00:27:36.865 "is_configured": true, 00:27:36.865 "data_offset": 256, 00:27:36.865 "data_size": 7936 00:27:36.865 } 00:27:36.865 ] 00:27:36.865 }' 00:27:36.865 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:36.865 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:37.125 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:37.125 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:37.125 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:37.125 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:37.125 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:37.125 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:37.125 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:37.125 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.125 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:37.125 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.125 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:37.125 "name": "raid_bdev1", 00:27:37.125 "uuid": "96aded8c-1ecc-44de-bb3e-050d856f24f1", 00:27:37.125 "strip_size_kb": 0, 00:27:37.125 "state": "online", 00:27:37.125 "raid_level": "raid1", 00:27:37.125 "superblock": true, 00:27:37.125 "num_base_bdevs": 2, 
00:27:37.125 "num_base_bdevs_discovered": 2, 00:27:37.125 "num_base_bdevs_operational": 2, 00:27:37.125 "base_bdevs_list": [ 00:27:37.125 { 00:27:37.125 "name": "spare", 00:27:37.125 "uuid": "2d73a740-b8ac-52ba-bb2d-695ba8aa6360", 00:27:37.125 "is_configured": true, 00:27:37.125 "data_offset": 256, 00:27:37.125 "data_size": 7936 00:27:37.125 }, 00:27:37.125 { 00:27:37.125 "name": "BaseBdev2", 00:27:37.125 "uuid": "2957fb0a-2dd9-5d13-8dda-ff321973bc0d", 00:27:37.125 "is_configured": true, 00:27:37.125 "data_offset": 256, 00:27:37.125 "data_size": 7936 00:27:37.125 } 00:27:37.125 ] 00:27:37.125 }' 00:27:37.125 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:37.386 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:37.386 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:37.386 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:37.386 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:27:37.386 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:37.386 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.386 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:37.386 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.386 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:27:37.386 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:27:37.386 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.386 23:09:12 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:37.386 [2024-12-09 23:09:12.574954] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:37.386 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.386 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:37.386 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:37.386 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:37.386 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:37.386 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:37.386 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:37.386 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:37.386 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:37.386 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:37.386 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:37.386 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:37.386 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:37.386 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.386 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:37.386 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.386 
23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:37.386 "name": "raid_bdev1", 00:27:37.386 "uuid": "96aded8c-1ecc-44de-bb3e-050d856f24f1", 00:27:37.386 "strip_size_kb": 0, 00:27:37.386 "state": "online", 00:27:37.386 "raid_level": "raid1", 00:27:37.386 "superblock": true, 00:27:37.386 "num_base_bdevs": 2, 00:27:37.386 "num_base_bdevs_discovered": 1, 00:27:37.386 "num_base_bdevs_operational": 1, 00:27:37.386 "base_bdevs_list": [ 00:27:37.386 { 00:27:37.386 "name": null, 00:27:37.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:37.386 "is_configured": false, 00:27:37.386 "data_offset": 0, 00:27:37.386 "data_size": 7936 00:27:37.386 }, 00:27:37.386 { 00:27:37.386 "name": "BaseBdev2", 00:27:37.386 "uuid": "2957fb0a-2dd9-5d13-8dda-ff321973bc0d", 00:27:37.386 "is_configured": true, 00:27:37.386 "data_offset": 256, 00:27:37.386 "data_size": 7936 00:27:37.386 } 00:27:37.386 ] 00:27:37.386 }' 00:27:37.386 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:37.386 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:37.647 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:37.647 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.647 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:37.647 [2024-12-09 23:09:12.923029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:37.647 [2024-12-09 23:09:12.923334] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:37.647 [2024-12-09 23:09:12.923356] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:27:37.647 [2024-12-09 23:09:12.923389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:37.647 [2024-12-09 23:09:12.932195] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:27:37.647 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.647 23:09:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:27:37.647 [2024-12-09 23:09:12.933761] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:38.589 23:09:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:38.589 23:09:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:38.589 23:09:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:38.589 23:09:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:38.589 23:09:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:38.589 23:09:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:38.589 23:09:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:38.589 23:09:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.589 23:09:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:38.850 23:09:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.850 23:09:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:38.850 "name": "raid_bdev1", 00:27:38.850 "uuid": "96aded8c-1ecc-44de-bb3e-050d856f24f1", 00:27:38.850 "strip_size_kb": 0, 00:27:38.850 "state": "online", 
00:27:38.850 "raid_level": "raid1", 00:27:38.850 "superblock": true, 00:27:38.850 "num_base_bdevs": 2, 00:27:38.850 "num_base_bdevs_discovered": 2, 00:27:38.850 "num_base_bdevs_operational": 2, 00:27:38.850 "process": { 00:27:38.850 "type": "rebuild", 00:27:38.850 "target": "spare", 00:27:38.850 "progress": { 00:27:38.850 "blocks": 2560, 00:27:38.850 "percent": 32 00:27:38.850 } 00:27:38.850 }, 00:27:38.850 "base_bdevs_list": [ 00:27:38.850 { 00:27:38.850 "name": "spare", 00:27:38.850 "uuid": "2d73a740-b8ac-52ba-bb2d-695ba8aa6360", 00:27:38.850 "is_configured": true, 00:27:38.850 "data_offset": 256, 00:27:38.850 "data_size": 7936 00:27:38.850 }, 00:27:38.850 { 00:27:38.850 "name": "BaseBdev2", 00:27:38.850 "uuid": "2957fb0a-2dd9-5d13-8dda-ff321973bc0d", 00:27:38.850 "is_configured": true, 00:27:38.850 "data_offset": 256, 00:27:38.850 "data_size": 7936 00:27:38.850 } 00:27:38.850 ] 00:27:38.850 }' 00:27:38.850 23:09:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:38.850 23:09:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:38.850 23:09:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:38.850 23:09:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:38.850 23:09:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:27:38.850 23:09:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.850 23:09:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:38.850 [2024-12-09 23:09:14.039993] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:38.850 [2024-12-09 23:09:14.139216] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:38.850 [2024-12-09 
23:09:14.139286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:38.850 [2024-12-09 23:09:14.139299] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:38.850 [2024-12-09 23:09:14.139306] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:38.850 23:09:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.850 23:09:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:38.850 23:09:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:38.850 23:09:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:38.850 23:09:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:38.850 23:09:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:38.850 23:09:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:38.850 23:09:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:38.850 23:09:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:38.850 23:09:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:38.850 23:09:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:38.850 23:09:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:38.850 23:09:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:38.850 23:09:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.850 23:09:14 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:27:38.850 23:09:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.850 23:09:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:38.850 "name": "raid_bdev1", 00:27:38.850 "uuid": "96aded8c-1ecc-44de-bb3e-050d856f24f1", 00:27:38.850 "strip_size_kb": 0, 00:27:38.850 "state": "online", 00:27:38.850 "raid_level": "raid1", 00:27:38.850 "superblock": true, 00:27:38.850 "num_base_bdevs": 2, 00:27:38.850 "num_base_bdevs_discovered": 1, 00:27:38.850 "num_base_bdevs_operational": 1, 00:27:38.850 "base_bdevs_list": [ 00:27:38.850 { 00:27:38.850 "name": null, 00:27:38.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:38.850 "is_configured": false, 00:27:38.850 "data_offset": 0, 00:27:38.850 "data_size": 7936 00:27:38.850 }, 00:27:38.850 { 00:27:38.850 "name": "BaseBdev2", 00:27:38.850 "uuid": "2957fb0a-2dd9-5d13-8dda-ff321973bc0d", 00:27:38.850 "is_configured": true, 00:27:38.850 "data_offset": 256, 00:27:38.850 "data_size": 7936 00:27:38.850 } 00:27:38.850 ] 00:27:38.850 }' 00:27:38.850 23:09:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:38.850 23:09:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:39.111 23:09:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:27:39.111 23:09:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.111 23:09:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:39.111 [2024-12-09 23:09:14.470040] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:39.111 [2024-12-09 23:09:14.470094] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:39.111 [2024-12-09 23:09:14.470120] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:27:39.111 [2024-12-09 23:09:14.470129] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:39.111 [2024-12-09 23:09:14.470500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:39.111 [2024-12-09 23:09:14.470514] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:39.111 [2024-12-09 23:09:14.470586] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:39.111 [2024-12-09 23:09:14.470598] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:39.111 [2024-12-09 23:09:14.470606] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:39.111 [2024-12-09 23:09:14.470625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:39.370 [2024-12-09 23:09:14.479831] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:27:39.370 spare 00:27:39.370 23:09:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.370 23:09:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:27:39.370 [2024-12-09 23:09:14.481474] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:40.321 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:40.321 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:40.321 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:40.321 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:40.321 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:40.321 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:40.321 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:40.321 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.321 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:40.321 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.321 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:40.321 "name": "raid_bdev1", 00:27:40.321 "uuid": "96aded8c-1ecc-44de-bb3e-050d856f24f1", 00:27:40.321 "strip_size_kb": 0, 00:27:40.321 "state": "online", 00:27:40.321 "raid_level": "raid1", 00:27:40.321 "superblock": true, 00:27:40.322 "num_base_bdevs": 2, 00:27:40.322 "num_base_bdevs_discovered": 2, 00:27:40.322 "num_base_bdevs_operational": 2, 00:27:40.322 "process": { 00:27:40.322 "type": "rebuild", 00:27:40.322 "target": "spare", 00:27:40.322 "progress": { 00:27:40.322 "blocks": 2560, 00:27:40.322 "percent": 32 00:27:40.322 } 00:27:40.322 }, 00:27:40.322 "base_bdevs_list": [ 00:27:40.322 { 00:27:40.322 "name": "spare", 00:27:40.322 "uuid": "2d73a740-b8ac-52ba-bb2d-695ba8aa6360", 00:27:40.322 "is_configured": true, 00:27:40.322 "data_offset": 256, 00:27:40.322 "data_size": 7936 00:27:40.322 }, 00:27:40.322 { 00:27:40.322 "name": "BaseBdev2", 00:27:40.322 "uuid": "2957fb0a-2dd9-5d13-8dda-ff321973bc0d", 00:27:40.322 "is_configured": true, 00:27:40.322 "data_offset": 256, 00:27:40.322 "data_size": 7936 00:27:40.322 } 00:27:40.322 ] 00:27:40.322 }' 00:27:40.322 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:40.322 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:27:40.322 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:40.322 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:40.322 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:27:40.322 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.322 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:40.322 [2024-12-09 23:09:15.583528] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:40.322 [2024-12-09 23:09:15.586728] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:40.322 [2024-12-09 23:09:15.586867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:40.322 [2024-12-09 23:09:15.586928] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:40.322 [2024-12-09 23:09:15.586948] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:40.322 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.322 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:40.322 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:40.322 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:40.322 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:40.322 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:40.322 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:27:40.322 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:40.322 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:40.322 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:40.322 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:40.322 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:40.322 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.322 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:40.322 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:40.322 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.322 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:40.322 "name": "raid_bdev1", 00:27:40.322 "uuid": "96aded8c-1ecc-44de-bb3e-050d856f24f1", 00:27:40.322 "strip_size_kb": 0, 00:27:40.322 "state": "online", 00:27:40.322 "raid_level": "raid1", 00:27:40.322 "superblock": true, 00:27:40.322 "num_base_bdevs": 2, 00:27:40.322 "num_base_bdevs_discovered": 1, 00:27:40.322 "num_base_bdevs_operational": 1, 00:27:40.322 "base_bdevs_list": [ 00:27:40.322 { 00:27:40.322 "name": null, 00:27:40.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:40.322 "is_configured": false, 00:27:40.322 "data_offset": 0, 00:27:40.322 "data_size": 7936 00:27:40.322 }, 00:27:40.322 { 00:27:40.322 "name": "BaseBdev2", 00:27:40.322 "uuid": "2957fb0a-2dd9-5d13-8dda-ff321973bc0d", 00:27:40.322 "is_configured": true, 00:27:40.322 "data_offset": 256, 00:27:40.322 "data_size": 7936 00:27:40.322 } 00:27:40.322 ] 00:27:40.322 }' 
00:27:40.322 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:40.322 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:40.583 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:40.583 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:40.583 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:40.583 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:40.583 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:40.583 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:40.583 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.583 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:40.583 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:40.583 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.583 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:40.583 "name": "raid_bdev1", 00:27:40.583 "uuid": "96aded8c-1ecc-44de-bb3e-050d856f24f1", 00:27:40.583 "strip_size_kb": 0, 00:27:40.583 "state": "online", 00:27:40.583 "raid_level": "raid1", 00:27:40.583 "superblock": true, 00:27:40.583 "num_base_bdevs": 2, 00:27:40.583 "num_base_bdevs_discovered": 1, 00:27:40.583 "num_base_bdevs_operational": 1, 00:27:40.583 "base_bdevs_list": [ 00:27:40.583 { 00:27:40.583 "name": null, 00:27:40.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:40.583 "is_configured": false, 00:27:40.583 "data_offset": 0, 
00:27:40.583 "data_size": 7936 00:27:40.583 }, 00:27:40.583 { 00:27:40.583 "name": "BaseBdev2", 00:27:40.583 "uuid": "2957fb0a-2dd9-5d13-8dda-ff321973bc0d", 00:27:40.583 "is_configured": true, 00:27:40.583 "data_offset": 256, 00:27:40.583 "data_size": 7936 00:27:40.583 } 00:27:40.583 ] 00:27:40.583 }' 00:27:40.583 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:40.844 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:40.844 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:40.844 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:40.844 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:27:40.844 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.844 23:09:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:40.844 23:09:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.844 23:09:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:40.844 23:09:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.844 23:09:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:40.844 [2024-12-09 23:09:16.009982] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:40.844 [2024-12-09 23:09:16.010035] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:40.844 [2024-12-09 23:09:16.010057] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:27:40.844 [2024-12-09 23:09:16.010065] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:40.844 [2024-12-09 23:09:16.010442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:40.844 [2024-12-09 23:09:16.010455] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:40.844 [2024-12-09 23:09:16.010520] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:27:40.844 [2024-12-09 23:09:16.010531] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:40.844 [2024-12-09 23:09:16.010541] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:40.844 [2024-12-09 23:09:16.010549] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:27:40.844 BaseBdev1 00:27:40.844 23:09:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.844 23:09:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:27:41.784 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:41.784 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:41.784 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:41.784 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:41.784 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:41.784 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:41.784 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:41.784 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:41.784 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:41.784 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:41.784 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:41.784 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:41.784 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.784 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:41.784 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.784 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:41.784 "name": "raid_bdev1", 00:27:41.784 "uuid": "96aded8c-1ecc-44de-bb3e-050d856f24f1", 00:27:41.784 "strip_size_kb": 0, 00:27:41.784 "state": "online", 00:27:41.785 "raid_level": "raid1", 00:27:41.785 "superblock": true, 00:27:41.785 "num_base_bdevs": 2, 00:27:41.785 "num_base_bdevs_discovered": 1, 00:27:41.785 "num_base_bdevs_operational": 1, 00:27:41.785 "base_bdevs_list": [ 00:27:41.785 { 00:27:41.785 "name": null, 00:27:41.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.785 "is_configured": false, 00:27:41.785 "data_offset": 0, 00:27:41.785 "data_size": 7936 00:27:41.785 }, 00:27:41.785 { 00:27:41.785 "name": "BaseBdev2", 00:27:41.785 "uuid": "2957fb0a-2dd9-5d13-8dda-ff321973bc0d", 00:27:41.785 "is_configured": true, 00:27:41.785 "data_offset": 256, 00:27:41.785 "data_size": 7936 00:27:41.785 } 00:27:41.785 ] 00:27:41.785 }' 00:27:41.785 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:41.785 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:27:42.046 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:42.046 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:42.046 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:42.046 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:42.046 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:42.046 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:42.046 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:42.046 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.046 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:42.046 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.046 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:42.046 "name": "raid_bdev1", 00:27:42.046 "uuid": "96aded8c-1ecc-44de-bb3e-050d856f24f1", 00:27:42.046 "strip_size_kb": 0, 00:27:42.046 "state": "online", 00:27:42.046 "raid_level": "raid1", 00:27:42.046 "superblock": true, 00:27:42.046 "num_base_bdevs": 2, 00:27:42.046 "num_base_bdevs_discovered": 1, 00:27:42.046 "num_base_bdevs_operational": 1, 00:27:42.046 "base_bdevs_list": [ 00:27:42.046 { 00:27:42.046 "name": null, 00:27:42.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:42.046 "is_configured": false, 00:27:42.046 "data_offset": 0, 00:27:42.046 "data_size": 7936 00:27:42.046 }, 00:27:42.046 { 00:27:42.046 "name": "BaseBdev2", 00:27:42.046 "uuid": "2957fb0a-2dd9-5d13-8dda-ff321973bc0d", 00:27:42.046 "is_configured": true, 
00:27:42.046 "data_offset": 256, 00:27:42.046 "data_size": 7936 00:27:42.046 } 00:27:42.046 ] 00:27:42.046 }' 00:27:42.046 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:42.046 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:42.046 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:42.309 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:42.309 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:42.309 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:27:42.309 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:42.309 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:42.309 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:42.309 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:42.309 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:42.309 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:42.309 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.309 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:42.309 [2024-12-09 23:09:17.442299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:42.309 [2024-12-09 23:09:17.442424] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:42.309 [2024-12-09 23:09:17.442436] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:42.309 request: 00:27:42.309 { 00:27:42.309 "base_bdev": "BaseBdev1", 00:27:42.309 "raid_bdev": "raid_bdev1", 00:27:42.309 "method": "bdev_raid_add_base_bdev", 00:27:42.309 "req_id": 1 00:27:42.309 } 00:27:42.309 Got JSON-RPC error response 00:27:42.309 response: 00:27:42.309 { 00:27:42.309 "code": -22, 00:27:42.309 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:27:42.309 } 00:27:42.309 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:42.309 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:27:42.309 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:42.309 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:42.309 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:42.309 23:09:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:27:43.253 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:43.253 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:43.253 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:43.253 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:43.253 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:43.253 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:27:43.253 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:43.253 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:43.253 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:43.253 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:43.253 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:43.253 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.253 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:43.253 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:43.253 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.253 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:43.253 "name": "raid_bdev1", 00:27:43.253 "uuid": "96aded8c-1ecc-44de-bb3e-050d856f24f1", 00:27:43.253 "strip_size_kb": 0, 00:27:43.253 "state": "online", 00:27:43.253 "raid_level": "raid1", 00:27:43.253 "superblock": true, 00:27:43.253 "num_base_bdevs": 2, 00:27:43.253 "num_base_bdevs_discovered": 1, 00:27:43.253 "num_base_bdevs_operational": 1, 00:27:43.253 "base_bdevs_list": [ 00:27:43.253 { 00:27:43.253 "name": null, 00:27:43.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:43.253 "is_configured": false, 00:27:43.253 "data_offset": 0, 00:27:43.253 "data_size": 7936 00:27:43.253 }, 00:27:43.253 { 00:27:43.253 "name": "BaseBdev2", 00:27:43.253 "uuid": "2957fb0a-2dd9-5d13-8dda-ff321973bc0d", 00:27:43.253 "is_configured": true, 00:27:43.253 "data_offset": 256, 00:27:43.253 "data_size": 7936 00:27:43.253 } 00:27:43.253 ] 00:27:43.253 }' 
00:27:43.253 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:43.253 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:43.513 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:43.513 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:43.513 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:43.513 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:43.513 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:43.513 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:43.513 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.513 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:43.513 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:43.513 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.513 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:43.513 "name": "raid_bdev1", 00:27:43.513 "uuid": "96aded8c-1ecc-44de-bb3e-050d856f24f1", 00:27:43.513 "strip_size_kb": 0, 00:27:43.513 "state": "online", 00:27:43.513 "raid_level": "raid1", 00:27:43.513 "superblock": true, 00:27:43.513 "num_base_bdevs": 2, 00:27:43.513 "num_base_bdevs_discovered": 1, 00:27:43.513 "num_base_bdevs_operational": 1, 00:27:43.513 "base_bdevs_list": [ 00:27:43.513 { 00:27:43.514 "name": null, 00:27:43.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:43.514 "is_configured": false, 00:27:43.514 "data_offset": 0, 
00:27:43.514 "data_size": 7936 00:27:43.514 }, 00:27:43.514 { 00:27:43.514 "name": "BaseBdev2", 00:27:43.514 "uuid": "2957fb0a-2dd9-5d13-8dda-ff321973bc0d", 00:27:43.514 "is_configured": true, 00:27:43.514 "data_offset": 256, 00:27:43.514 "data_size": 7936 00:27:43.514 } 00:27:43.514 ] 00:27:43.514 }' 00:27:43.514 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:43.514 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:43.514 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:43.774 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:43.774 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 84043 00:27:43.774 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 84043 ']' 00:27:43.774 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 84043 00:27:43.774 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:27:43.774 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:43.774 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84043 00:27:43.774 killing process with pid 84043 00:27:43.774 Received shutdown signal, test time was about 60.000000 seconds 00:27:43.774 00:27:43.774 Latency(us) 00:27:43.774 [2024-12-09T23:09:19.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:43.774 [2024-12-09T23:09:19.137Z] =================================================================================================================== 00:27:43.774 [2024-12-09T23:09:19.137Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:43.774 23:09:18 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:43.774 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:43.774 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84043' 00:27:43.774 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 84043 00:27:43.774 23:09:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 84043 00:27:43.774 [2024-12-09 23:09:18.907233] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:43.774 [2024-12-09 23:09:18.907324] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:43.774 [2024-12-09 23:09:18.907362] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:43.774 [2024-12-09 23:09:18.907401] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:27:43.774 [2024-12-09 23:09:19.054391] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:44.408 ************************************ 00:27:44.408 END TEST raid_rebuild_test_sb_4k 00:27:44.408 ************************************ 00:27:44.408 23:09:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:27:44.408 00:27:44.408 real 0m17.067s 00:27:44.408 user 0m21.863s 00:27:44.408 sys 0m1.903s 00:27:44.408 23:09:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:44.408 23:09:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:44.408 23:09:19 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:27:44.408 23:09:19 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:27:44.408 23:09:19 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:27:44.408 23:09:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:44.408 23:09:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:44.408 ************************************ 00:27:44.408 START TEST raid_state_function_test_sb_md_separate 00:27:44.408 ************************************ 00:27:44.408 23:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:27:44.408 23:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:27:44.408 23:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:27:44.408 23:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:27:44.408 23:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:27:44.408 23:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:27:44.408 23:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:44.408 23:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:27:44.408 23:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:44.408 23:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:44.408 23:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:27:44.408 23:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:44.408 23:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:27:44.408 Process raid pid: 84705 00:27:44.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:44.408 23:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:44.408 23:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:27:44.408 23:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:27:44.408 23:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:27:44.408 23:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:27:44.408 23:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:27:44.408 23:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:27:44.408 23:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:27:44.408 23:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:27:44.408 23:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:27:44.408 23:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=84705 00:27:44.408 23:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84705' 00:27:44.408 23:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 84705 00:27:44.408 23:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 84705 ']' 00:27:44.408 23:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:44.408 23:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:27:44.408 23:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:44.408 23:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:44.408 23:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:44.408 23:09:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:44.670 [2024-12-09 23:09:19.770811] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:27:44.670 [2024-12-09 23:09:19.771182] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:44.670 [2024-12-09 23:09:19.946088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.932 [2024-12-09 23:09:20.031248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.932 [2024-12-09 23:09:20.143637] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:44.932 [2024-12-09 23:09:20.143672] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:45.503 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:45.503 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:27:45.503 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:27:45.503 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.503 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:45.503 [2024-12-09 23:09:20.604549] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:45.503 [2024-12-09 23:09:20.604599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:45.503 [2024-12-09 23:09:20.604607] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:45.503 [2024-12-09 23:09:20.604615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:45.503 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.503 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:45.503 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:45.503 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:45.503 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:45.503 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:45.503 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:45.503 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:45.503 23:09:20 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:45.503 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:45.503 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:45.503 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:45.503 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:45.503 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.503 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:45.503 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.503 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:45.503 "name": "Existed_Raid", 00:27:45.503 "uuid": "e5e5454f-bee2-4ad3-98f4-68a2bd0810eb", 00:27:45.503 "strip_size_kb": 0, 00:27:45.503 "state": "configuring", 00:27:45.503 "raid_level": "raid1", 00:27:45.503 "superblock": true, 00:27:45.503 "num_base_bdevs": 2, 00:27:45.503 "num_base_bdevs_discovered": 0, 00:27:45.503 "num_base_bdevs_operational": 2, 00:27:45.503 "base_bdevs_list": [ 00:27:45.503 { 00:27:45.503 "name": "BaseBdev1", 00:27:45.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:45.503 "is_configured": false, 00:27:45.503 "data_offset": 0, 00:27:45.503 "data_size": 0 00:27:45.503 }, 00:27:45.503 { 00:27:45.503 "name": "BaseBdev2", 00:27:45.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:45.503 "is_configured": false, 00:27:45.503 "data_offset": 0, 00:27:45.503 "data_size": 0 00:27:45.503 } 00:27:45.503 ] 00:27:45.503 }' 00:27:45.503 
23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:45.503 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:45.764 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:45.764 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.764 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:45.764 [2024-12-09 23:09:20.936555] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:45.764 [2024-12-09 23:09:20.936583] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:27:45.764 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.764 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:27:45.764 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.764 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:45.764 [2024-12-09 23:09:20.944546] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:45.764 [2024-12-09 23:09:20.944580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:45.764 [2024-12-09 23:09:20.944588] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:45.764 [2024-12-09 23:09:20.944596] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:45.764 23:09:20 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.764 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:27:45.764 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.764 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:45.764 [2024-12-09 23:09:20.973030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:45.764 BaseBdev1 00:27:45.764 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.765 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:27:45.765 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:27:45.765 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:45.765 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:27:45.765 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:45.765 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:45.765 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:45.765 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.765 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:45.765 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.765 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:45.765 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.765 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:45.765 [ 00:27:45.765 { 00:27:45.765 "name": "BaseBdev1", 00:27:45.765 "aliases": [ 00:27:45.765 "925a0c7f-7c80-4c1f-a7c2-893838ee661d" 00:27:45.765 ], 00:27:45.765 "product_name": "Malloc disk", 00:27:45.765 "block_size": 4096, 00:27:45.765 "num_blocks": 8192, 00:27:45.765 "uuid": "925a0c7f-7c80-4c1f-a7c2-893838ee661d", 00:27:45.765 "md_size": 32, 00:27:45.765 "md_interleave": false, 00:27:45.765 "dif_type": 0, 00:27:45.765 "assigned_rate_limits": { 00:27:45.765 "rw_ios_per_sec": 0, 00:27:45.765 "rw_mbytes_per_sec": 0, 00:27:45.765 "r_mbytes_per_sec": 0, 00:27:45.765 "w_mbytes_per_sec": 0 00:27:45.765 }, 00:27:45.765 "claimed": true, 00:27:45.765 "claim_type": "exclusive_write", 00:27:45.765 "zoned": false, 00:27:45.765 "supported_io_types": { 00:27:45.765 "read": true, 00:27:45.765 "write": true, 00:27:45.765 "unmap": true, 00:27:45.765 "flush": true, 00:27:45.765 "reset": true, 00:27:45.765 "nvme_admin": false, 00:27:45.765 "nvme_io": false, 00:27:45.765 "nvme_io_md": false, 00:27:45.765 "write_zeroes": true, 00:27:45.765 "zcopy": true, 00:27:45.765 "get_zone_info": false, 00:27:45.765 "zone_management": false, 00:27:45.765 "zone_append": false, 00:27:45.765 "compare": false, 00:27:45.765 "compare_and_write": false, 00:27:45.765 "abort": true, 00:27:45.765 "seek_hole": false, 00:27:45.765 "seek_data": false, 00:27:45.765 "copy": true, 00:27:45.765 "nvme_iov_md": false 00:27:45.765 }, 00:27:45.765 "memory_domains": [ 00:27:45.765 { 00:27:45.765 "dma_device_id": "system", 00:27:45.765 "dma_device_type": 1 00:27:45.765 }, 
00:27:45.765 { 00:27:45.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:45.765 "dma_device_type": 2 00:27:45.765 } 00:27:45.765 ], 00:27:45.765 "driver_specific": {} 00:27:45.765 } 00:27:45.765 ] 00:27:45.765 23:09:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.765 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:27:45.765 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:45.765 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:45.765 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:45.765 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:45.765 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:45.765 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:45.765 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:45.765 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:45.765 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:45.765 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:45.765 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:45.765 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:27:45.765 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.765 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:45.765 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.765 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:45.765 "name": "Existed_Raid", 00:27:45.765 "uuid": "6d0aefa2-445d-445c-a755-cab8ce59a0ed", 00:27:45.765 "strip_size_kb": 0, 00:27:45.765 "state": "configuring", 00:27:45.765 "raid_level": "raid1", 00:27:45.765 "superblock": true, 00:27:45.765 "num_base_bdevs": 2, 00:27:45.765 "num_base_bdevs_discovered": 1, 00:27:45.765 "num_base_bdevs_operational": 2, 00:27:45.765 "base_bdevs_list": [ 00:27:45.765 { 00:27:45.765 "name": "BaseBdev1", 00:27:45.765 "uuid": "925a0c7f-7c80-4c1f-a7c2-893838ee661d", 00:27:45.765 "is_configured": true, 00:27:45.765 "data_offset": 256, 00:27:45.765 "data_size": 7936 00:27:45.765 }, 00:27:45.765 { 00:27:45.765 "name": "BaseBdev2", 00:27:45.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:45.765 "is_configured": false, 00:27:45.765 "data_offset": 0, 00:27:45.765 "data_size": 0 00:27:45.765 } 00:27:45.765 ] 00:27:45.765 }' 00:27:45.765 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:45.765 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:46.337 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:46.337 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.337 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:27:46.337 [2024-12-09 23:09:21.409184] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:46.337 [2024-12-09 23:09:21.409224] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:27:46.337 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.337 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:27:46.337 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.337 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:46.337 [2024-12-09 23:09:21.417212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:46.337 [2024-12-09 23:09:21.418709] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:46.337 [2024-12-09 23:09:21.418742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:46.337 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.337 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:27:46.337 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:46.337 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:46.337 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:46.337 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:27:46.337 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:46.337 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:46.337 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:46.337 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:46.337 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:46.337 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:46.337 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:46.337 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:46.337 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.337 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:46.337 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:46.337 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.337 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:46.337 "name": "Existed_Raid", 00:27:46.337 "uuid": "30fceb38-3431-40dd-bcbe-3bb94ac01a6f", 00:27:46.337 "strip_size_kb": 0, 00:27:46.337 "state": "configuring", 00:27:46.337 "raid_level": "raid1", 00:27:46.337 "superblock": true, 00:27:46.337 "num_base_bdevs": 2, 00:27:46.337 "num_base_bdevs_discovered": 1, 00:27:46.337 
"num_base_bdevs_operational": 2, 00:27:46.337 "base_bdevs_list": [ 00:27:46.337 { 00:27:46.337 "name": "BaseBdev1", 00:27:46.337 "uuid": "925a0c7f-7c80-4c1f-a7c2-893838ee661d", 00:27:46.337 "is_configured": true, 00:27:46.337 "data_offset": 256, 00:27:46.337 "data_size": 7936 00:27:46.337 }, 00:27:46.337 { 00:27:46.337 "name": "BaseBdev2", 00:27:46.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:46.337 "is_configured": false, 00:27:46.337 "data_offset": 0, 00:27:46.337 "data_size": 0 00:27:46.337 } 00:27:46.337 ] 00:27:46.337 }' 00:27:46.337 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:46.337 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:46.598 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:27:46.598 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.598 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:46.598 [2024-12-09 23:09:21.783011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:46.598 [2024-12-09 23:09:21.783199] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:27:46.599 [2024-12-09 23:09:21.783213] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:46.599 [2024-12-09 23:09:21.783275] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:27:46.599 [2024-12-09 23:09:21.783366] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:46.599 [2024-12-09 23:09:21.783374] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:27:46.599 [2024-12-09 
23:09:21.783440] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:46.599 BaseBdev2 00:27:46.599 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.599 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:27:46.599 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:27:46.599 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:46.599 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:27:46.599 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:46.599 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:46.599 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:46.599 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.599 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:46.599 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.599 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:46.599 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.599 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:46.599 [ 00:27:46.599 { 00:27:46.599 "name": "BaseBdev2", 00:27:46.599 "aliases": [ 00:27:46.599 
"2f6e26af-b124-4b0f-986e-1c6678c66d54" 00:27:46.599 ], 00:27:46.599 "product_name": "Malloc disk", 00:27:46.599 "block_size": 4096, 00:27:46.599 "num_blocks": 8192, 00:27:46.599 "uuid": "2f6e26af-b124-4b0f-986e-1c6678c66d54", 00:27:46.599 "md_size": 32, 00:27:46.599 "md_interleave": false, 00:27:46.599 "dif_type": 0, 00:27:46.599 "assigned_rate_limits": { 00:27:46.599 "rw_ios_per_sec": 0, 00:27:46.599 "rw_mbytes_per_sec": 0, 00:27:46.599 "r_mbytes_per_sec": 0, 00:27:46.599 "w_mbytes_per_sec": 0 00:27:46.599 }, 00:27:46.599 "claimed": true, 00:27:46.599 "claim_type": "exclusive_write", 00:27:46.599 "zoned": false, 00:27:46.599 "supported_io_types": { 00:27:46.599 "read": true, 00:27:46.599 "write": true, 00:27:46.599 "unmap": true, 00:27:46.599 "flush": true, 00:27:46.599 "reset": true, 00:27:46.599 "nvme_admin": false, 00:27:46.599 "nvme_io": false, 00:27:46.599 "nvme_io_md": false, 00:27:46.599 "write_zeroes": true, 00:27:46.599 "zcopy": true, 00:27:46.599 "get_zone_info": false, 00:27:46.599 "zone_management": false, 00:27:46.599 "zone_append": false, 00:27:46.599 "compare": false, 00:27:46.599 "compare_and_write": false, 00:27:46.599 "abort": true, 00:27:46.599 "seek_hole": false, 00:27:46.599 "seek_data": false, 00:27:46.599 "copy": true, 00:27:46.599 "nvme_iov_md": false 00:27:46.599 }, 00:27:46.599 "memory_domains": [ 00:27:46.599 { 00:27:46.599 "dma_device_id": "system", 00:27:46.599 "dma_device_type": 1 00:27:46.599 }, 00:27:46.599 { 00:27:46.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:46.599 "dma_device_type": 2 00:27:46.599 } 00:27:46.599 ], 00:27:46.599 "driver_specific": {} 00:27:46.599 } 00:27:46.599 ] 00:27:46.599 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.599 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:27:46.599 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:27:46.599 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:46.599 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:27:46.599 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:46.599 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:46.599 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:46.599 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:46.599 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:46.599 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:46.599 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:46.599 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:46.599 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:46.599 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:46.599 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:46.599 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.599 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:46.599 23:09:21 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.599 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:46.599 "name": "Existed_Raid", 00:27:46.599 "uuid": "30fceb38-3431-40dd-bcbe-3bb94ac01a6f", 00:27:46.599 "strip_size_kb": 0, 00:27:46.599 "state": "online", 00:27:46.599 "raid_level": "raid1", 00:27:46.599 "superblock": true, 00:27:46.599 "num_base_bdevs": 2, 00:27:46.599 "num_base_bdevs_discovered": 2, 00:27:46.599 "num_base_bdevs_operational": 2, 00:27:46.599 "base_bdevs_list": [ 00:27:46.599 { 00:27:46.599 "name": "BaseBdev1", 00:27:46.599 "uuid": "925a0c7f-7c80-4c1f-a7c2-893838ee661d", 00:27:46.599 "is_configured": true, 00:27:46.599 "data_offset": 256, 00:27:46.599 "data_size": 7936 00:27:46.599 }, 00:27:46.599 { 00:27:46.599 "name": "BaseBdev2", 00:27:46.599 "uuid": "2f6e26af-b124-4b0f-986e-1c6678c66d54", 00:27:46.599 "is_configured": true, 00:27:46.599 "data_offset": 256, 00:27:46.599 "data_size": 7936 00:27:46.599 } 00:27:46.599 ] 00:27:46.599 }' 00:27:46.599 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:46.599 23:09:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:46.858 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:27:46.858 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:27:46.858 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:46.858 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:46.858 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:27:46.858 23:09:22 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:46.858 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:27:46.858 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:46.858 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.858 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:46.858 [2024-12-09 23:09:22.107382] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:46.858 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.858 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:46.858 "name": "Existed_Raid", 00:27:46.858 "aliases": [ 00:27:46.858 "30fceb38-3431-40dd-bcbe-3bb94ac01a6f" 00:27:46.858 ], 00:27:46.858 "product_name": "Raid Volume", 00:27:46.858 "block_size": 4096, 00:27:46.858 "num_blocks": 7936, 00:27:46.858 "uuid": "30fceb38-3431-40dd-bcbe-3bb94ac01a6f", 00:27:46.858 "md_size": 32, 00:27:46.858 "md_interleave": false, 00:27:46.858 "dif_type": 0, 00:27:46.858 "assigned_rate_limits": { 00:27:46.858 "rw_ios_per_sec": 0, 00:27:46.858 "rw_mbytes_per_sec": 0, 00:27:46.858 "r_mbytes_per_sec": 0, 00:27:46.858 "w_mbytes_per_sec": 0 00:27:46.858 }, 00:27:46.858 "claimed": false, 00:27:46.858 "zoned": false, 00:27:46.858 "supported_io_types": { 00:27:46.858 "read": true, 00:27:46.858 "write": true, 00:27:46.858 "unmap": false, 00:27:46.858 "flush": false, 00:27:46.858 "reset": true, 00:27:46.858 "nvme_admin": false, 00:27:46.858 "nvme_io": false, 00:27:46.858 "nvme_io_md": false, 00:27:46.858 "write_zeroes": true, 00:27:46.858 "zcopy": false, 00:27:46.858 "get_zone_info": 
false, 00:27:46.858 "zone_management": false, 00:27:46.858 "zone_append": false, 00:27:46.858 "compare": false, 00:27:46.858 "compare_and_write": false, 00:27:46.858 "abort": false, 00:27:46.858 "seek_hole": false, 00:27:46.858 "seek_data": false, 00:27:46.858 "copy": false, 00:27:46.858 "nvme_iov_md": false 00:27:46.858 }, 00:27:46.858 "memory_domains": [ 00:27:46.858 { 00:27:46.859 "dma_device_id": "system", 00:27:46.859 "dma_device_type": 1 00:27:46.859 }, 00:27:46.859 { 00:27:46.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:46.859 "dma_device_type": 2 00:27:46.859 }, 00:27:46.859 { 00:27:46.859 "dma_device_id": "system", 00:27:46.859 "dma_device_type": 1 00:27:46.859 }, 00:27:46.859 { 00:27:46.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:46.859 "dma_device_type": 2 00:27:46.859 } 00:27:46.859 ], 00:27:46.859 "driver_specific": { 00:27:46.859 "raid": { 00:27:46.859 "uuid": "30fceb38-3431-40dd-bcbe-3bb94ac01a6f", 00:27:46.859 "strip_size_kb": 0, 00:27:46.859 "state": "online", 00:27:46.859 "raid_level": "raid1", 00:27:46.859 "superblock": true, 00:27:46.859 "num_base_bdevs": 2, 00:27:46.859 "num_base_bdevs_discovered": 2, 00:27:46.859 "num_base_bdevs_operational": 2, 00:27:46.859 "base_bdevs_list": [ 00:27:46.859 { 00:27:46.859 "name": "BaseBdev1", 00:27:46.859 "uuid": "925a0c7f-7c80-4c1f-a7c2-893838ee661d", 00:27:46.859 "is_configured": true, 00:27:46.859 "data_offset": 256, 00:27:46.859 "data_size": 7936 00:27:46.859 }, 00:27:46.859 { 00:27:46.859 "name": "BaseBdev2", 00:27:46.859 "uuid": "2f6e26af-b124-4b0f-986e-1c6678c66d54", 00:27:46.859 "is_configured": true, 00:27:46.859 "data_offset": 256, 00:27:46.859 "data_size": 7936 00:27:46.859 } 00:27:46.859 ] 00:27:46.859 } 00:27:46.859 } 00:27:46.859 }' 00:27:46.859 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:46.859 23:09:22 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:27:46.859 BaseBdev2' 00:27:46.859 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:46.859 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:27:46.859 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:46.859 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:46.859 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:27:46.859 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.859 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:46.859 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.118 23:09:22 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:47.118 [2024-12-09 23:09:22.263192] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 1 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:47.118 "name": "Existed_Raid", 
00:27:47.118 "uuid": "30fceb38-3431-40dd-bcbe-3bb94ac01a6f", 00:27:47.118 "strip_size_kb": 0, 00:27:47.118 "state": "online", 00:27:47.118 "raid_level": "raid1", 00:27:47.118 "superblock": true, 00:27:47.118 "num_base_bdevs": 2, 00:27:47.118 "num_base_bdevs_discovered": 1, 00:27:47.118 "num_base_bdevs_operational": 1, 00:27:47.118 "base_bdevs_list": [ 00:27:47.118 { 00:27:47.118 "name": null, 00:27:47.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:47.118 "is_configured": false, 00:27:47.118 "data_offset": 0, 00:27:47.118 "data_size": 7936 00:27:47.118 }, 00:27:47.118 { 00:27:47.118 "name": "BaseBdev2", 00:27:47.118 "uuid": "2f6e26af-b124-4b0f-986e-1c6678c66d54", 00:27:47.118 "is_configured": true, 00:27:47.118 "data_offset": 256, 00:27:47.118 "data_size": 7936 00:27:47.118 } 00:27:47.118 ] 00:27:47.118 }' 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:47.118 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:47.378 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:27:47.379 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:47.379 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:47.379 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.379 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:47.379 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:47.379 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.379 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:47.379 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:47.379 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:27:47.379 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.379 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:47.379 [2024-12-09 23:09:22.650304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:47.379 [2024-12-09 23:09:22.650387] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:47.379 [2024-12-09 23:09:22.702141] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:47.379 [2024-12-09 23:09:22.702181] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:47.379 [2024-12-09 23:09:22.702191] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:27:47.379 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.379 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:47.379 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:47.379 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:47.379 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.379 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:47.379 23:09:22 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:27:47.379 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.379 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:27:47.379 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:27:47.379 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:27:47.379 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 84705 00:27:47.379 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 84705 ']' 00:27:47.379 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 84705 00:27:47.640 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:27:47.640 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:47.640 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84705 00:27:47.640 killing process with pid 84705 00:27:47.640 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:47.640 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:47.640 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84705' 00:27:47.640 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 84705 00:27:47.640 [2024-12-09 23:09:22.763519] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:47.640 23:09:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 84705 00:27:47.640 [2024-12-09 23:09:22.772059] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:48.212 ************************************ 00:27:48.212 END TEST raid_state_function_test_sb_md_separate 00:27:48.212 ************************************ 00:27:48.212 23:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:27:48.212 00:27:48.212 real 0m3.673s 00:27:48.212 user 0m5.355s 00:27:48.212 sys 0m0.611s 00:27:48.212 23:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:48.212 23:09:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:48.212 23:09:23 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:27:48.212 23:09:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:48.212 23:09:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:48.212 23:09:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:48.212 ************************************ 00:27:48.212 START TEST raid_superblock_test_md_separate 00:27:48.212 ************************************ 00:27:48.212 23:09:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:27:48.212 23:09:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:27:48.212 23:09:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:27:48.212 23:09:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:27:48.212 23:09:23 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:27:48.212 23:09:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:27:48.212 23:09:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:27:48.212 23:09:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:27:48.212 23:09:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:27:48.212 23:09:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:27:48.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:48.212 23:09:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:27:48.212 23:09:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:27:48.212 23:09:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:27:48.212 23:09:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:27:48.212 23:09:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:27:48.212 23:09:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:27:48.212 23:09:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=84941 00:27:48.212 23:09:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 84941 00:27:48.212 23:09:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 84941 ']' 00:27:48.212 23:09:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:27:48.212 23:09:23 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:48.212 23:09:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:48.212 23:09:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:48.212 23:09:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:48.212 23:09:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:48.212 [2024-12-09 23:09:23.472659] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:27:48.212 [2024-12-09 23:09:23.472839] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84941 ] 00:27:48.472 [2024-12-09 23:09:23.646830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.472 [2024-12-09 23:09:23.730540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.733 [2024-12-09 23:09:23.841581] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:48.733 [2024-12-09 23:09:23.841612] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:48.993 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:48.993 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:27:48.993 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:27:48.993 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:27:48.993 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:27:48.993 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:27:48.993 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:48.993 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:48.993 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:48.993 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:48.993 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:27:48.993 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.993 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:48.993 malloc1 00:27:48.993 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.993 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:48.993 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.993 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:48.993 [2024-12-09 23:09:24.335656] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:48.993 [2024-12-09 23:09:24.335705] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:48.993 [2024-12-09 23:09:24.335724] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:48.993 [2024-12-09 23:09:24.335732] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:48.993 [2024-12-09 23:09:24.337363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:48.993 [2024-12-09 23:09:24.337505] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:48.993 pt1 00:27:48.993 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.993 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:48.993 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:48.993 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:27:48.993 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:27:48.993 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:48.993 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:48.993 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:48.993 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:48.993 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:27:48.993 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.993 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:49.255 malloc2 00:27:49.255 23:09:24 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.255 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:49.255 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.255 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:49.255 [2024-12-09 23:09:24.371996] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:49.255 [2024-12-09 23:09:24.372044] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:49.255 [2024-12-09 23:09:24.372061] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:49.255 [2024-12-09 23:09:24.372068] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:49.255 [2024-12-09 23:09:24.373659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:49.255 [2024-12-09 23:09:24.373686] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:49.255 pt2 00:27:49.255 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.255 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:49.255 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:49.255 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:27:49.255 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.255 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:49.255 
[2024-12-09 23:09:24.380027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:49.255 [2024-12-09 23:09:24.381809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:49.255 [2024-12-09 23:09:24.381960] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:27:49.255 [2024-12-09 23:09:24.381972] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:49.255 [2024-12-09 23:09:24.382046] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:27:49.255 [2024-12-09 23:09:24.382181] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:27:49.255 [2024-12-09 23:09:24.382197] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:27:49.255 [2024-12-09 23:09:24.382286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:49.255 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.255 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:49.255 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:49.255 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:49.255 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:49.255 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:49.255 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:49.255 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:27:49.255 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:49.255 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:49.255 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:49.255 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:49.255 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:49.255 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.255 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:49.256 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.256 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:49.256 "name": "raid_bdev1", 00:27:49.256 "uuid": "f18828cb-078c-4f1a-8cef-712a68d902dd", 00:27:49.256 "strip_size_kb": 0, 00:27:49.256 "state": "online", 00:27:49.256 "raid_level": "raid1", 00:27:49.256 "superblock": true, 00:27:49.256 "num_base_bdevs": 2, 00:27:49.256 "num_base_bdevs_discovered": 2, 00:27:49.256 "num_base_bdevs_operational": 2, 00:27:49.256 "base_bdevs_list": [ 00:27:49.256 { 00:27:49.256 "name": "pt1", 00:27:49.256 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:49.256 "is_configured": true, 00:27:49.256 "data_offset": 256, 00:27:49.256 "data_size": 7936 00:27:49.256 }, 00:27:49.256 { 00:27:49.256 "name": "pt2", 00:27:49.256 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:49.256 "is_configured": true, 00:27:49.256 "data_offset": 256, 00:27:49.256 "data_size": 7936 00:27:49.256 } 00:27:49.256 ] 00:27:49.256 }' 00:27:49.256 23:09:24 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:49.256 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:49.517 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:27:49.517 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:27:49.517 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:49.517 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:49.517 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:27:49.517 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:49.517 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:49.517 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.517 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:49.517 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:49.517 [2024-12-09 23:09:24.712329] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:49.517 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.517 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:49.517 "name": "raid_bdev1", 00:27:49.517 "aliases": [ 00:27:49.517 "f18828cb-078c-4f1a-8cef-712a68d902dd" 00:27:49.517 ], 00:27:49.517 "product_name": "Raid Volume", 00:27:49.517 "block_size": 4096, 00:27:49.517 "num_blocks": 7936, 00:27:49.517 "uuid": "f18828cb-078c-4f1a-8cef-712a68d902dd", 
00:27:49.517 "md_size": 32, 00:27:49.517 "md_interleave": false, 00:27:49.517 "dif_type": 0, 00:27:49.517 "assigned_rate_limits": { 00:27:49.517 "rw_ios_per_sec": 0, 00:27:49.517 "rw_mbytes_per_sec": 0, 00:27:49.517 "r_mbytes_per_sec": 0, 00:27:49.517 "w_mbytes_per_sec": 0 00:27:49.517 }, 00:27:49.517 "claimed": false, 00:27:49.517 "zoned": false, 00:27:49.517 "supported_io_types": { 00:27:49.517 "read": true, 00:27:49.517 "write": true, 00:27:49.517 "unmap": false, 00:27:49.517 "flush": false, 00:27:49.517 "reset": true, 00:27:49.517 "nvme_admin": false, 00:27:49.517 "nvme_io": false, 00:27:49.517 "nvme_io_md": false, 00:27:49.517 "write_zeroes": true, 00:27:49.517 "zcopy": false, 00:27:49.517 "get_zone_info": false, 00:27:49.517 "zone_management": false, 00:27:49.517 "zone_append": false, 00:27:49.517 "compare": false, 00:27:49.517 "compare_and_write": false, 00:27:49.517 "abort": false, 00:27:49.517 "seek_hole": false, 00:27:49.517 "seek_data": false, 00:27:49.517 "copy": false, 00:27:49.517 "nvme_iov_md": false 00:27:49.517 }, 00:27:49.517 "memory_domains": [ 00:27:49.517 { 00:27:49.517 "dma_device_id": "system", 00:27:49.517 "dma_device_type": 1 00:27:49.517 }, 00:27:49.517 { 00:27:49.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:49.517 "dma_device_type": 2 00:27:49.517 }, 00:27:49.517 { 00:27:49.517 "dma_device_id": "system", 00:27:49.517 "dma_device_type": 1 00:27:49.517 }, 00:27:49.517 { 00:27:49.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:49.517 "dma_device_type": 2 00:27:49.517 } 00:27:49.517 ], 00:27:49.517 "driver_specific": { 00:27:49.517 "raid": { 00:27:49.517 "uuid": "f18828cb-078c-4f1a-8cef-712a68d902dd", 00:27:49.517 "strip_size_kb": 0, 00:27:49.517 "state": "online", 00:27:49.517 "raid_level": "raid1", 00:27:49.517 "superblock": true, 00:27:49.517 "num_base_bdevs": 2, 00:27:49.517 "num_base_bdevs_discovered": 2, 00:27:49.517 "num_base_bdevs_operational": 2, 00:27:49.517 "base_bdevs_list": [ 00:27:49.517 { 00:27:49.517 "name": "pt1", 
00:27:49.517 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:49.517 "is_configured": true, 00:27:49.517 "data_offset": 256, 00:27:49.517 "data_size": 7936 00:27:49.517 }, 00:27:49.517 { 00:27:49.517 "name": "pt2", 00:27:49.517 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:49.517 "is_configured": true, 00:27:49.517 "data_offset": 256, 00:27:49.517 "data_size": 7936 00:27:49.517 } 00:27:49.517 ] 00:27:49.517 } 00:27:49.517 } 00:27:49.517 }' 00:27:49.517 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:49.517 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:27:49.517 pt2' 00:27:49.517 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:49.517 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:27:49.517 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:49.517 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:27:49.517 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.517 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:49.517 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:49.517 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.517 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:27:49.517 23:09:24 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:27:49.517 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:49.517 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:27:49.518 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.518 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:49.518 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:49.518 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.779 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:27:49.779 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:27:49.779 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:49.779 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:27:49.779 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.779 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:49.779 [2024-12-09 23:09:24.892332] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:49.779 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.779 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # 
raid_bdev_uuid=f18828cb-078c-4f1a-8cef-712a68d902dd 00:27:49.779 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z f18828cb-078c-4f1a-8cef-712a68d902dd ']' 00:27:49.779 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:49.779 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.779 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:49.779 [2024-12-09 23:09:24.916082] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:49.779 [2024-12-09 23:09:24.916187] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:49.779 [2024-12-09 23:09:24.916265] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:49.779 [2024-12-09 23:09:24.916320] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:49.779 [2024-12-09 23:09:24.916329] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:27:49.779 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.779 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:49.779 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:27:49.779 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.779 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:49.779 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.779 23:09:24 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:27:49.779 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:27:49.779 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:49.779 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:27:49.779 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.779 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:49.779 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.779 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:49.779 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:27:49.779 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.780 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:49.780 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.780 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:27:49.780 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.780 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:49.780 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:49.780 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.780 23:09:24 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:27:49.780 23:09:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:27:49.780 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:27:49.780 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:27:49.780 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:49.780 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:49.780 23:09:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:49.780 [2024-12-09 23:09:25.008126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:49.780 [2024-12-09 23:09:25.009679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:49.780 [2024-12-09 23:09:25.009738] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:27:49.780 [2024-12-09 23:09:25.009781] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:27:49.780 [2024-12-09 23:09:25.009793] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:49.780 [2024-12-09 23:09:25.009802] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:27:49.780 request: 00:27:49.780 { 00:27:49.780 "name": "raid_bdev1", 00:27:49.780 "raid_level": "raid1", 00:27:49.780 "base_bdevs": [ 00:27:49.780 "malloc1", 00:27:49.780 "malloc2" 00:27:49.780 ], 00:27:49.780 "superblock": false, 00:27:49.780 "method": "bdev_raid_create", 00:27:49.780 "req_id": 1 00:27:49.780 } 00:27:49.780 Got JSON-RPC error response 00:27:49.780 response: 00:27:49.780 { 00:27:49.780 "code": -17, 00:27:49.780 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:49.780 } 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:49.780 23:09:25 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:49.780 [2024-12-09 23:09:25.052132] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:49.780 [2024-12-09 23:09:25.052181] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:49.780 [2024-12-09 23:09:25.052195] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:27:49.780 [2024-12-09 23:09:25.052204] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:49.780 [2024-12-09 23:09:25.053844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:49.780 [2024-12-09 23:09:25.053875] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:49.780 [2024-12-09 23:09:25.053914] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:49.780 [2024-12-09 23:09:25.053956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:49.780 pt1 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:27:49.780 23:09:25 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:49.780 "name": "raid_bdev1", 00:27:49.780 "uuid": "f18828cb-078c-4f1a-8cef-712a68d902dd", 00:27:49.780 "strip_size_kb": 0, 00:27:49.780 "state": "configuring", 00:27:49.780 "raid_level": "raid1", 00:27:49.780 
"superblock": true, 00:27:49.780 "num_base_bdevs": 2, 00:27:49.780 "num_base_bdevs_discovered": 1, 00:27:49.780 "num_base_bdevs_operational": 2, 00:27:49.780 "base_bdevs_list": [ 00:27:49.780 { 00:27:49.780 "name": "pt1", 00:27:49.780 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:49.780 "is_configured": true, 00:27:49.780 "data_offset": 256, 00:27:49.780 "data_size": 7936 00:27:49.780 }, 00:27:49.780 { 00:27:49.780 "name": null, 00:27:49.780 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:49.780 "is_configured": false, 00:27:49.780 "data_offset": 256, 00:27:49.780 "data_size": 7936 00:27:49.780 } 00:27:49.780 ] 00:27:49.780 }' 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:49.780 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:50.041 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:27:50.041 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:27:50.041 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:50.041 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:50.041 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.041 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:50.041 [2024-12-09 23:09:25.364187] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:50.041 [2024-12-09 23:09:25.364250] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:50.041 [2024-12-09 23:09:25.364266] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:27:50.041 
[2024-12-09 23:09:25.364275] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:50.041 [2024-12-09 23:09:25.364451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:50.041 [2024-12-09 23:09:25.364466] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:50.041 [2024-12-09 23:09:25.364504] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:50.041 [2024-12-09 23:09:25.364528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:50.041 [2024-12-09 23:09:25.364613] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:27:50.041 [2024-12-09 23:09:25.364622] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:50.041 [2024-12-09 23:09:25.364677] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:27:50.041 [2024-12-09 23:09:25.364757] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:50.041 [2024-12-09 23:09:25.364763] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:27:50.041 [2024-12-09 23:09:25.364832] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:50.041 pt2 00:27:50.041 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.041 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:27:50.041 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:50.041 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:50.041 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:27:50.041 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:50.041 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:50.041 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:50.041 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:50.041 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:50.041 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:50.041 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:50.041 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:50.041 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:50.041 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.041 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:50.041 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:50.041 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.339 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:50.339 "name": "raid_bdev1", 00:27:50.339 "uuid": "f18828cb-078c-4f1a-8cef-712a68d902dd", 00:27:50.339 "strip_size_kb": 0, 00:27:50.339 "state": "online", 00:27:50.339 "raid_level": "raid1", 00:27:50.339 "superblock": true, 00:27:50.339 "num_base_bdevs": 2, 00:27:50.339 "num_base_bdevs_discovered": 2, 00:27:50.339 
"num_base_bdevs_operational": 2, 00:27:50.339 "base_bdevs_list": [ 00:27:50.339 { 00:27:50.339 "name": "pt1", 00:27:50.339 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:50.339 "is_configured": true, 00:27:50.339 "data_offset": 256, 00:27:50.339 "data_size": 7936 00:27:50.339 }, 00:27:50.339 { 00:27:50.339 "name": "pt2", 00:27:50.339 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:50.339 "is_configured": true, 00:27:50.339 "data_offset": 256, 00:27:50.339 "data_size": 7936 00:27:50.339 } 00:27:50.339 ] 00:27:50.339 }' 00:27:50.339 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:50.339 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:50.339 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:27:50.339 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:27:50.339 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:50.339 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:50.339 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:27:50.339 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:50.339 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:50.339 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.339 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:50.339 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:50.339 [2024-12-09 23:09:25.676490] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:50.339 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:50.599 "name": "raid_bdev1", 00:27:50.599 "aliases": [ 00:27:50.599 "f18828cb-078c-4f1a-8cef-712a68d902dd" 00:27:50.599 ], 00:27:50.599 "product_name": "Raid Volume", 00:27:50.599 "block_size": 4096, 00:27:50.599 "num_blocks": 7936, 00:27:50.599 "uuid": "f18828cb-078c-4f1a-8cef-712a68d902dd", 00:27:50.599 "md_size": 32, 00:27:50.599 "md_interleave": false, 00:27:50.599 "dif_type": 0, 00:27:50.599 "assigned_rate_limits": { 00:27:50.599 "rw_ios_per_sec": 0, 00:27:50.599 "rw_mbytes_per_sec": 0, 00:27:50.599 "r_mbytes_per_sec": 0, 00:27:50.599 "w_mbytes_per_sec": 0 00:27:50.599 }, 00:27:50.599 "claimed": false, 00:27:50.599 "zoned": false, 00:27:50.599 "supported_io_types": { 00:27:50.599 "read": true, 00:27:50.599 "write": true, 00:27:50.599 "unmap": false, 00:27:50.599 "flush": false, 00:27:50.599 "reset": true, 00:27:50.599 "nvme_admin": false, 00:27:50.599 "nvme_io": false, 00:27:50.599 "nvme_io_md": false, 00:27:50.599 "write_zeroes": true, 00:27:50.599 "zcopy": false, 00:27:50.599 "get_zone_info": false, 00:27:50.599 "zone_management": false, 00:27:50.599 "zone_append": false, 00:27:50.599 "compare": false, 00:27:50.599 "compare_and_write": false, 00:27:50.599 "abort": false, 00:27:50.599 "seek_hole": false, 00:27:50.599 "seek_data": false, 00:27:50.599 "copy": false, 00:27:50.599 "nvme_iov_md": false 00:27:50.599 }, 00:27:50.599 "memory_domains": [ 00:27:50.599 { 00:27:50.599 "dma_device_id": "system", 00:27:50.599 "dma_device_type": 1 00:27:50.599 }, 00:27:50.599 { 00:27:50.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:50.599 "dma_device_type": 2 00:27:50.599 }, 00:27:50.599 { 00:27:50.599 "dma_device_id": "system", 00:27:50.599 "dma_device_type": 
1 00:27:50.599 }, 00:27:50.599 { 00:27:50.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:50.599 "dma_device_type": 2 00:27:50.599 } 00:27:50.599 ], 00:27:50.599 "driver_specific": { 00:27:50.599 "raid": { 00:27:50.599 "uuid": "f18828cb-078c-4f1a-8cef-712a68d902dd", 00:27:50.599 "strip_size_kb": 0, 00:27:50.599 "state": "online", 00:27:50.599 "raid_level": "raid1", 00:27:50.599 "superblock": true, 00:27:50.599 "num_base_bdevs": 2, 00:27:50.599 "num_base_bdevs_discovered": 2, 00:27:50.599 "num_base_bdevs_operational": 2, 00:27:50.599 "base_bdevs_list": [ 00:27:50.599 { 00:27:50.599 "name": "pt1", 00:27:50.599 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:50.599 "is_configured": true, 00:27:50.599 "data_offset": 256, 00:27:50.599 "data_size": 7936 00:27:50.599 }, 00:27:50.599 { 00:27:50.599 "name": "pt2", 00:27:50.599 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:50.599 "is_configured": true, 00:27:50.599 "data_offset": 256, 00:27:50.599 "data_size": 7936 00:27:50.599 } 00:27:50.599 ] 00:27:50.599 } 00:27:50.599 } 00:27:50.599 }' 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:27:50.599 pt2' 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs 
-b raid_bdev1 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:50.599 [2024-12-09 23:09:25.844521] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' f18828cb-078c-4f1a-8cef-712a68d902dd '!=' f18828cb-078c-4f1a-8cef-712a68d902dd ']' 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:50.599 [2024-12-09 23:09:25.864307] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:50.599 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:50.599 23:09:25 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:50.600 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:50.600 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:50.600 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:50.600 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:50.600 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:50.600 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:50.600 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:50.600 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:50.600 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.600 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:50.600 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.600 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:50.600 "name": "raid_bdev1", 00:27:50.600 "uuid": "f18828cb-078c-4f1a-8cef-712a68d902dd", 00:27:50.600 "strip_size_kb": 0, 00:27:50.600 "state": "online", 00:27:50.600 "raid_level": "raid1", 00:27:50.600 "superblock": true, 00:27:50.600 "num_base_bdevs": 2, 00:27:50.600 "num_base_bdevs_discovered": 1, 00:27:50.600 "num_base_bdevs_operational": 1, 00:27:50.600 "base_bdevs_list": [ 00:27:50.600 { 00:27:50.600 "name": null, 00:27:50.600 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:27:50.600 "is_configured": false, 00:27:50.600 "data_offset": 0, 00:27:50.600 "data_size": 7936 00:27:50.600 }, 00:27:50.600 { 00:27:50.600 "name": "pt2", 00:27:50.600 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:50.600 "is_configured": true, 00:27:50.600 "data_offset": 256, 00:27:50.600 "data_size": 7936 00:27:50.600 } 00:27:50.600 ] 00:27:50.600 }' 00:27:50.600 23:09:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:50.600 23:09:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:50.860 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:50.860 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.860 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:50.860 [2024-12-09 23:09:26.172335] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:50.860 [2024-12-09 23:09:26.172359] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:50.860 [2024-12-09 23:09:26.172415] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:50.860 [2024-12-09 23:09:26.172451] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:50.860 [2024-12-09 23:09:26.172460] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:27:50.860 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.860 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:27:50.860 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:27:50.860 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.860 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:50.860 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.860 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:27:50.860 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:27:50.860 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:27:50.860 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:27:50.860 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:27:50.860 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.860 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:50.860 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.860 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:27:50.860 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:27:50.860 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:27:50.860 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:27:50.860 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:27:50.860 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:27:50.860 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.860 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:51.120 [2024-12-09 23:09:26.220341] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:51.120 [2024-12-09 23:09:26.220382] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:51.120 [2024-12-09 23:09:26.220395] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:27:51.120 [2024-12-09 23:09:26.220404] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:51.120 [2024-12-09 23:09:26.222031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:51.120 [2024-12-09 23:09:26.222060] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:51.120 [2024-12-09 23:09:26.222110] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:51.120 [2024-12-09 23:09:26.222146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:51.120 [2024-12-09 23:09:26.222215] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:27:51.120 [2024-12-09 23:09:26.222224] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:51.120 [2024-12-09 23:09:26.222279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:51.120 [2024-12-09 23:09:26.222358] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:27:51.120 [2024-12-09 23:09:26.222364] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:27:51.120 [2024-12-09 23:09:26.222433] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:27:51.120 pt2 00:27:51.120 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.120 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:51.120 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:51.120 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:51.120 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:51.120 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:51.120 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:51.120 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:51.120 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:51.120 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:51.120 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:51.120 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:51.120 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.120 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:51.120 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:51.120 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.120 
23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:51.120 "name": "raid_bdev1", 00:27:51.120 "uuid": "f18828cb-078c-4f1a-8cef-712a68d902dd", 00:27:51.120 "strip_size_kb": 0, 00:27:51.120 "state": "online", 00:27:51.120 "raid_level": "raid1", 00:27:51.120 "superblock": true, 00:27:51.120 "num_base_bdevs": 2, 00:27:51.120 "num_base_bdevs_discovered": 1, 00:27:51.120 "num_base_bdevs_operational": 1, 00:27:51.120 "base_bdevs_list": [ 00:27:51.120 { 00:27:51.120 "name": null, 00:27:51.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:51.120 "is_configured": false, 00:27:51.120 "data_offset": 256, 00:27:51.120 "data_size": 7936 00:27:51.120 }, 00:27:51.120 { 00:27:51.120 "name": "pt2", 00:27:51.120 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:51.120 "is_configured": true, 00:27:51.120 "data_offset": 256, 00:27:51.120 "data_size": 7936 00:27:51.120 } 00:27:51.120 ] 00:27:51.120 }' 00:27:51.120 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:51.120 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:51.382 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:51.382 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.382 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:51.382 [2024-12-09 23:09:26.584392] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:51.382 [2024-12-09 23:09:26.584418] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:51.382 [2024-12-09 23:09:26.584471] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:51.382 [2024-12-09 23:09:26.584528] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:51.382 [2024-12-09 23:09:26.584535] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:27:51.382 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.382 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:27:51.382 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:51.382 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.382 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:51.382 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.382 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:27:51.382 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:27:51.382 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:27:51.382 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:51.382 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.382 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:51.382 [2024-12-09 23:09:26.624438] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:51.382 [2024-12-09 23:09:26.624487] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:51.382 [2024-12-09 23:09:26.624501] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000009980 00:27:51.382 [2024-12-09 23:09:26.624517] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:51.382 [2024-12-09 23:09:26.626215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:51.382 [2024-12-09 23:09:26.626242] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:51.382 [2024-12-09 23:09:26.626287] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:51.382 [2024-12-09 23:09:26.626324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:51.382 [2024-12-09 23:09:26.626421] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:27:51.382 [2024-12-09 23:09:26.626433] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:51.382 [2024-12-09 23:09:26.626448] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:27:51.382 [2024-12-09 23:09:26.626496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:51.382 [2024-12-09 23:09:26.626548] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:27:51.382 [2024-12-09 23:09:26.626555] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:51.382 [2024-12-09 23:09:26.626607] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:27:51.382 [2024-12-09 23:09:26.626689] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:27:51.382 [2024-12-09 23:09:26.626703] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:27:51.382 [2024-12-09 23:09:26.626781] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:51.382 pt1 00:27:51.382 
23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.382 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:27:51.382 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:51.382 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:51.382 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:51.382 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:51.382 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:51.382 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:51.382 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:51.382 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:51.382 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:51.382 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:51.382 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:51.383 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.383 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:51.383 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:51.383 23:09:26 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.383 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:51.383 "name": "raid_bdev1", 00:27:51.383 "uuid": "f18828cb-078c-4f1a-8cef-712a68d902dd", 00:27:51.383 "strip_size_kb": 0, 00:27:51.383 "state": "online", 00:27:51.383 "raid_level": "raid1", 00:27:51.383 "superblock": true, 00:27:51.383 "num_base_bdevs": 2, 00:27:51.383 "num_base_bdevs_discovered": 1, 00:27:51.383 "num_base_bdevs_operational": 1, 00:27:51.383 "base_bdevs_list": [ 00:27:51.383 { 00:27:51.383 "name": null, 00:27:51.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:51.383 "is_configured": false, 00:27:51.383 "data_offset": 256, 00:27:51.383 "data_size": 7936 00:27:51.383 }, 00:27:51.383 { 00:27:51.383 "name": "pt2", 00:27:51.383 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:51.383 "is_configured": true, 00:27:51.383 "data_offset": 256, 00:27:51.383 "data_size": 7936 00:27:51.383 } 00:27:51.383 ] 00:27:51.383 }' 00:27:51.383 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:51.383 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:51.641 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:27:51.641 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.641 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:51.641 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:27:51.641 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.641 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:27:51.641 23:09:26 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:51.641 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:27:51.641 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.641 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:51.641 [2024-12-09 23:09:26.968704] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:51.641 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.641 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' f18828cb-078c-4f1a-8cef-712a68d902dd '!=' f18828cb-078c-4f1a-8cef-712a68d902dd ']' 00:27:51.641 23:09:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 84941 00:27:51.641 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 84941 ']' 00:27:51.641 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 84941 00:27:51.641 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:27:51.641 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:51.641 23:09:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84941 00:27:51.900 killing process with pid 84941 00:27:51.900 23:09:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:51.900 23:09:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:51.900 23:09:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 84941' 00:27:51.900 23:09:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 84941 00:27:51.900 [2024-12-09 23:09:27.019527] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:51.900 [2024-12-09 23:09:27.019597] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:51.900 23:09:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 84941 00:27:51.900 [2024-12-09 23:09:27.019637] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:51.900 [2024-12-09 23:09:27.019652] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:27:51.900 [2024-12-09 23:09:27.131126] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:52.469 ************************************ 00:27:52.469 END TEST raid_superblock_test_md_separate 00:27:52.469 ************************************ 00:27:52.469 23:09:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:27:52.469 00:27:52.469 real 0m4.326s 00:27:52.469 user 0m6.630s 00:27:52.469 sys 0m0.730s 00:27:52.469 23:09:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:52.469 23:09:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:52.469 23:09:27 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:27:52.469 23:09:27 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:27:52.469 23:09:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:27:52.469 23:09:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:52.469 23:09:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:52.469 ************************************ 
00:27:52.469 START TEST raid_rebuild_test_sb_md_separate 00:27:52.469 ************************************ 00:27:52.469 23:09:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:27:52.469 23:09:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:27:52.469 23:09:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:27:52.469 23:09:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:27:52.469 23:09:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:27:52.469 23:09:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:27:52.469 23:09:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:27:52.469 23:09:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:52.469 23:09:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:27:52.469 23:09:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:52.469 23:09:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:52.469 23:09:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:27:52.469 23:09:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:52.469 23:09:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:52.469 23:09:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:52.469 23:09:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:27:52.469 
23:09:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:27:52.469 23:09:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:27:52.469 23:09:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:27:52.469 23:09:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:27:52.469 23:09:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:27:52.469 23:09:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:27:52.469 23:09:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:27:52.469 23:09:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:27:52.469 23:09:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:27:52.469 23:09:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=85247 00:27:52.469 23:09:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 85247 00:27:52.469 23:09:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 85247 ']' 00:27:52.469 23:09:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:52.469 23:09:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:52.469 23:09:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:52.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:52.469 23:09:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:52.469 23:09:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:52.469 23:09:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:52.728 [2024-12-09 23:09:27.830524] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:27:52.728 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:52.728 Zero copy mechanism will not be used. 00:27:52.728 [2024-12-09 23:09:27.830651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85247 ] 00:27:52.728 [2024-12-09 23:09:27.985140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.986 [2024-12-09 23:09:28.099520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.986 [2024-12-09 23:09:28.239282] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:52.986 [2024-12-09 23:09:28.239339] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:53.557 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:53.557 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:27:53.557 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:27:53.557 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:27:53.557 23:09:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.557 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:53.557 BaseBdev1_malloc 00:27:53.557 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.557 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:53.557 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.557 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:53.557 [2024-12-09 23:09:28.727759] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:53.557 [2024-12-09 23:09:28.727820] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:53.557 [2024-12-09 23:09:28.727841] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:53.557 [2024-12-09 23:09:28.727854] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:53.557 [2024-12-09 23:09:28.729921] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:53.557 [2024-12-09 23:09:28.729963] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:53.557 BaseBdev1 00:27:53.557 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.557 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:27:53.557 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:27:53.557 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:53.557 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:53.557 BaseBdev2_malloc 00:27:53.557 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.557 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:53.557 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.557 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:53.557 [2024-12-09 23:09:28.764998] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:53.557 [2024-12-09 23:09:28.765064] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:53.557 [2024-12-09 23:09:28.765082] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:53.557 [2024-12-09 23:09:28.765094] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:53.557 [2024-12-09 23:09:28.767119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:53.557 [2024-12-09 23:09:28.767158] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:53.557 BaseBdev2 00:27:53.557 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.557 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:27:53.557 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.557 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:53.557 spare_malloc 00:27:53.557 23:09:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.557 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:53.557 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.557 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:53.557 spare_delay 00:27:53.557 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.557 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:27:53.557 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.558 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:53.558 [2024-12-09 23:09:28.824804] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:53.558 [2024-12-09 23:09:28.824871] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:53.558 [2024-12-09 23:09:28.824894] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:27:53.558 [2024-12-09 23:09:28.824904] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:53.558 [2024-12-09 23:09:28.826952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:53.558 [2024-12-09 23:09:28.826993] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:53.558 spare 00:27:53.558 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.558 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd 
bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:27:53.558 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.558 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:53.558 [2024-12-09 23:09:28.832853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:53.558 [2024-12-09 23:09:28.834776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:53.558 [2024-12-09 23:09:28.834965] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:27:53.558 [2024-12-09 23:09:28.834979] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:53.558 [2024-12-09 23:09:28.835084] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:27:53.558 [2024-12-09 23:09:28.835227] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:27:53.558 [2024-12-09 23:09:28.835238] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:27:53.558 [2024-12-09 23:09:28.835346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:53.558 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.558 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:53.558 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:53.558 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:53.558 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:53.558 23:09:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:53.558 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:53.558 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:53.558 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:53.558 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:53.558 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:53.558 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:53.558 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:53.558 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.558 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:53.558 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.558 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:53.558 "name": "raid_bdev1", 00:27:53.558 "uuid": "874b691a-5109-45de-96ed-353ae7a39090", 00:27:53.558 "strip_size_kb": 0, 00:27:53.558 "state": "online", 00:27:53.558 "raid_level": "raid1", 00:27:53.558 "superblock": true, 00:27:53.558 "num_base_bdevs": 2, 00:27:53.558 "num_base_bdevs_discovered": 2, 00:27:53.558 "num_base_bdevs_operational": 2, 00:27:53.558 "base_bdevs_list": [ 00:27:53.558 { 00:27:53.558 "name": "BaseBdev1", 00:27:53.558 "uuid": "b1aeb030-6f1d-5d2f-93c0-ae6f4de142cd", 00:27:53.558 "is_configured": true, 00:27:53.558 "data_offset": 256, 00:27:53.558 
"data_size": 7936 00:27:53.558 }, 00:27:53.558 { 00:27:53.558 "name": "BaseBdev2", 00:27:53.558 "uuid": "143f7013-625e-5de0-9dbd-5517499a1405", 00:27:53.558 "is_configured": true, 00:27:53.558 "data_offset": 256, 00:27:53.558 "data_size": 7936 00:27:53.558 } 00:27:53.558 ] 00:27:53.558 }' 00:27:53.558 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:53.558 23:09:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:53.824 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:27:53.824 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:53.824 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.824 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:53.824 [2024-12-09 23:09:29.149212] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:53.824 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.824 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:27:53.824 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:53.824 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:53.824 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.824 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:54.084 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.084 23:09:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:27:54.084 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:27:54.084 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:27:54.084 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:27:54.084 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:27:54.084 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:27:54.084 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:27:54.084 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:54.084 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:54.084 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:54.084 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:27:54.084 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:54.084 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:54.084 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:27:54.084 [2024-12-09 23:09:29.401013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:27:54.084 /dev/nbd0 00:27:54.084 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:54.084 23:09:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:54.084 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:54.084 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:27:54.084 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:54.084 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:54.084 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:27:54.084 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:27:54.084 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:54.084 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:54.084 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:54.084 1+0 records in 00:27:54.084 1+0 records out 00:27:54.084 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330623 s, 12.4 MB/s 00:27:54.084 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:54.343 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:27:54.343 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:54.343 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:54.343 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@893 -- # return 0 00:27:54.343 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:54.343 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:54.343 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:27:54.343 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:27:54.343 23:09:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:27:54.915 7936+0 records in 00:27:54.915 7936+0 records out 00:27:54.915 32505856 bytes (33 MB, 31 MiB) copied, 0.723386 s, 44.9 MB/s 00:27:54.915 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:27:54.915 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:27:54.915 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:54.915 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:54.915 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:27:54.915 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:54.915 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:27:55.177 [2024-12-09 23:09:30.330508] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:55.177 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:55.177 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:55.177 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:55.177 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:55.177 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:55.178 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:55.178 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:27:55.178 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:27:55.178 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:27:55.178 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.178 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:55.178 [2024-12-09 23:09:30.354591] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:55.178 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.178 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:55.178 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:55.178 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:55.178 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:55.178 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:55.178 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:55.178 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:55.178 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:55.178 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:55.178 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:55.178 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:55.178 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:55.178 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.178 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:55.178 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.178 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:55.178 "name": "raid_bdev1", 00:27:55.178 "uuid": "874b691a-5109-45de-96ed-353ae7a39090", 00:27:55.178 "strip_size_kb": 0, 00:27:55.178 "state": "online", 00:27:55.178 "raid_level": "raid1", 00:27:55.178 "superblock": true, 00:27:55.178 "num_base_bdevs": 2, 00:27:55.178 "num_base_bdevs_discovered": 1, 00:27:55.178 "num_base_bdevs_operational": 1, 00:27:55.178 "base_bdevs_list": [ 00:27:55.178 { 00:27:55.178 "name": null, 00:27:55.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:55.178 "is_configured": false, 00:27:55.178 "data_offset": 0, 00:27:55.178 "data_size": 7936 00:27:55.178 }, 00:27:55.178 { 00:27:55.178 "name": "BaseBdev2", 00:27:55.178 "uuid": "143f7013-625e-5de0-9dbd-5517499a1405", 00:27:55.178 "is_configured": 
true, 00:27:55.178 "data_offset": 256, 00:27:55.178 "data_size": 7936 00:27:55.178 } 00:27:55.178 ] 00:27:55.178 }' 00:27:55.178 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:55.178 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:55.448 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:55.448 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.448 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:55.448 [2024-12-09 23:09:30.674664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:55.448 [2024-12-09 23:09:30.684478] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:27:55.448 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.448 23:09:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:27:55.448 [2024-12-09 23:09:30.686373] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:56.391 23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:56.391 23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:56.391 23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:56.391 23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:56.391 23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:56.391 23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:56.391 23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.391 23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:56.391 23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:56.391 23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.391 23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:56.391 "name": "raid_bdev1", 00:27:56.391 "uuid": "874b691a-5109-45de-96ed-353ae7a39090", 00:27:56.391 "strip_size_kb": 0, 00:27:56.391 "state": "online", 00:27:56.391 "raid_level": "raid1", 00:27:56.391 "superblock": true, 00:27:56.391 "num_base_bdevs": 2, 00:27:56.391 "num_base_bdevs_discovered": 2, 00:27:56.391 "num_base_bdevs_operational": 2, 00:27:56.391 "process": { 00:27:56.391 "type": "rebuild", 00:27:56.391 "target": "spare", 00:27:56.391 "progress": { 00:27:56.391 "blocks": 2560, 00:27:56.391 "percent": 32 00:27:56.391 } 00:27:56.391 }, 00:27:56.391 "base_bdevs_list": [ 00:27:56.391 { 00:27:56.391 "name": "spare", 00:27:56.391 "uuid": "c477d0b3-6491-5123-9022-d54fff5c6db1", 00:27:56.391 "is_configured": true, 00:27:56.391 "data_offset": 256, 00:27:56.391 "data_size": 7936 00:27:56.391 }, 00:27:56.391 { 00:27:56.391 "name": "BaseBdev2", 00:27:56.391 "uuid": "143f7013-625e-5de0-9dbd-5517499a1405", 00:27:56.391 "is_configured": true, 00:27:56.391 "data_offset": 256, 00:27:56.391 "data_size": 7936 00:27:56.391 } 00:27:56.391 ] 00:27:56.391 }' 00:27:56.391 23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:56.391 23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:56.652 
23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:56.652 23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:56.652 23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:27:56.652 23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.652 23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:56.652 [2024-12-09 23:09:31.784462] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:56.652 [2024-12-09 23:09:31.791953] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:56.652 [2024-12-09 23:09:31.792012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:56.652 [2024-12-09 23:09:31.792028] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:56.652 [2024-12-09 23:09:31.792040] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:56.652 23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.652 23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:56.652 23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:56.652 23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:56.652 23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:56.652 23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:56.652 23:09:31 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:56.652 23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:56.652 23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:56.652 23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:56.652 23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:56.652 23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:56.652 23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.652 23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:56.652 23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:56.653 23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.653 23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:56.653 "name": "raid_bdev1", 00:27:56.653 "uuid": "874b691a-5109-45de-96ed-353ae7a39090", 00:27:56.653 "strip_size_kb": 0, 00:27:56.653 "state": "online", 00:27:56.653 "raid_level": "raid1", 00:27:56.653 "superblock": true, 00:27:56.653 "num_base_bdevs": 2, 00:27:56.653 "num_base_bdevs_discovered": 1, 00:27:56.653 "num_base_bdevs_operational": 1, 00:27:56.653 "base_bdevs_list": [ 00:27:56.653 { 00:27:56.653 "name": null, 00:27:56.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:56.653 "is_configured": false, 00:27:56.653 "data_offset": 0, 00:27:56.653 "data_size": 7936 00:27:56.653 }, 00:27:56.653 { 00:27:56.653 "name": "BaseBdev2", 00:27:56.653 "uuid": 
"143f7013-625e-5de0-9dbd-5517499a1405", 00:27:56.653 "is_configured": true, 00:27:56.653 "data_offset": 256, 00:27:56.653 "data_size": 7936 00:27:56.653 } 00:27:56.653 ] 00:27:56.653 }' 00:27:56.653 23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:56.653 23:09:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:56.914 23:09:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:56.914 23:09:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:56.914 23:09:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:56.914 23:09:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:56.914 23:09:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:56.914 23:09:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:56.914 23:09:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:56.914 23:09:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.914 23:09:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:56.914 23:09:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.914 23:09:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:56.914 "name": "raid_bdev1", 00:27:56.914 "uuid": "874b691a-5109-45de-96ed-353ae7a39090", 00:27:56.914 "strip_size_kb": 0, 00:27:56.914 "state": "online", 00:27:56.914 "raid_level": "raid1", 00:27:56.914 "superblock": true, 00:27:56.914 
"num_base_bdevs": 2, 00:27:56.914 "num_base_bdevs_discovered": 1, 00:27:56.914 "num_base_bdevs_operational": 1, 00:27:56.914 "base_bdevs_list": [ 00:27:56.914 { 00:27:56.914 "name": null, 00:27:56.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:56.914 "is_configured": false, 00:27:56.914 "data_offset": 0, 00:27:56.914 "data_size": 7936 00:27:56.914 }, 00:27:56.914 { 00:27:56.914 "name": "BaseBdev2", 00:27:56.914 "uuid": "143f7013-625e-5de0-9dbd-5517499a1405", 00:27:56.914 "is_configured": true, 00:27:56.914 "data_offset": 256, 00:27:56.914 "data_size": 7936 00:27:56.914 } 00:27:56.914 ] 00:27:56.914 }' 00:27:56.914 23:09:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:56.914 23:09:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:56.914 23:09:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:56.914 23:09:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:56.914 23:09:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:56.914 23:09:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.914 23:09:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:56.914 [2024-12-09 23:09:32.217994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:56.914 [2024-12-09 23:09:32.227129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:27:56.914 23:09:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.914 23:09:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:27:56.914 [2024-12-09 23:09:32.229031] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:58.302 "name": "raid_bdev1", 00:27:58.302 "uuid": "874b691a-5109-45de-96ed-353ae7a39090", 00:27:58.302 "strip_size_kb": 0, 00:27:58.302 "state": "online", 00:27:58.302 "raid_level": "raid1", 00:27:58.302 "superblock": true, 00:27:58.302 "num_base_bdevs": 2, 00:27:58.302 "num_base_bdevs_discovered": 2, 00:27:58.302 "num_base_bdevs_operational": 2, 00:27:58.302 "process": { 00:27:58.302 "type": "rebuild", 00:27:58.302 "target": "spare", 00:27:58.302 "progress": { 00:27:58.302 "blocks": 2560, 00:27:58.302 "percent": 32 00:27:58.302 } 00:27:58.302 
}, 00:27:58.302 "base_bdevs_list": [ 00:27:58.302 { 00:27:58.302 "name": "spare", 00:27:58.302 "uuid": "c477d0b3-6491-5123-9022-d54fff5c6db1", 00:27:58.302 "is_configured": true, 00:27:58.302 "data_offset": 256, 00:27:58.302 "data_size": 7936 00:27:58.302 }, 00:27:58.302 { 00:27:58.302 "name": "BaseBdev2", 00:27:58.302 "uuid": "143f7013-625e-5de0-9dbd-5517499a1405", 00:27:58.302 "is_configured": true, 00:27:58.302 "data_offset": 256, 00:27:58.302 "data_size": 7936 00:27:58.302 } 00:27:58.302 ] 00:27:58.302 }' 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:27:58.302 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=581 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:58.302 23:09:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:58.302 "name": "raid_bdev1", 00:27:58.302 "uuid": "874b691a-5109-45de-96ed-353ae7a39090", 00:27:58.302 "strip_size_kb": 0, 00:27:58.302 "state": "online", 00:27:58.302 "raid_level": "raid1", 00:27:58.302 "superblock": true, 00:27:58.302 "num_base_bdevs": 2, 00:27:58.302 "num_base_bdevs_discovered": 2, 00:27:58.302 "num_base_bdevs_operational": 2, 00:27:58.302 "process": { 00:27:58.302 "type": "rebuild", 00:27:58.302 "target": "spare", 00:27:58.302 "progress": { 00:27:58.302 "blocks": 2816, 00:27:58.302 "percent": 35 00:27:58.302 } 00:27:58.302 }, 00:27:58.302 "base_bdevs_list": [ 00:27:58.302 { 00:27:58.302 "name": "spare", 00:27:58.302 "uuid": 
"c477d0b3-6491-5123-9022-d54fff5c6db1", 00:27:58.302 "is_configured": true, 00:27:58.302 "data_offset": 256, 00:27:58.302 "data_size": 7936 00:27:58.302 }, 00:27:58.302 { 00:27:58.302 "name": "BaseBdev2", 00:27:58.302 "uuid": "143f7013-625e-5de0-9dbd-5517499a1405", 00:27:58.302 "is_configured": true, 00:27:58.302 "data_offset": 256, 00:27:58.302 "data_size": 7936 00:27:58.302 } 00:27:58.302 ] 00:27:58.302 }' 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:58.302 23:09:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:59.243 23:09:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:59.243 23:09:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:59.243 23:09:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:59.243 23:09:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:59.243 23:09:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:59.243 23:09:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:59.243 23:09:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:59.243 23:09:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:59.243 23:09:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:59.243 23:09:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:59.243 23:09:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.243 23:09:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:59.243 "name": "raid_bdev1", 00:27:59.243 "uuid": "874b691a-5109-45de-96ed-353ae7a39090", 00:27:59.243 "strip_size_kb": 0, 00:27:59.243 "state": "online", 00:27:59.243 "raid_level": "raid1", 00:27:59.243 "superblock": true, 00:27:59.243 "num_base_bdevs": 2, 00:27:59.243 "num_base_bdevs_discovered": 2, 00:27:59.243 "num_base_bdevs_operational": 2, 00:27:59.243 "process": { 00:27:59.243 "type": "rebuild", 00:27:59.243 "target": "spare", 00:27:59.243 "progress": { 00:27:59.243 "blocks": 5376, 00:27:59.243 "percent": 67 00:27:59.243 } 00:27:59.243 }, 00:27:59.243 "base_bdevs_list": [ 00:27:59.243 { 00:27:59.243 "name": "spare", 00:27:59.243 "uuid": "c477d0b3-6491-5123-9022-d54fff5c6db1", 00:27:59.243 "is_configured": true, 00:27:59.243 "data_offset": 256, 00:27:59.243 "data_size": 7936 00:27:59.243 }, 00:27:59.243 { 00:27:59.243 "name": "BaseBdev2", 00:27:59.243 "uuid": "143f7013-625e-5de0-9dbd-5517499a1405", 00:27:59.243 "is_configured": true, 00:27:59.243 "data_offset": 256, 00:27:59.243 "data_size": 7936 00:27:59.243 } 00:27:59.243 ] 00:27:59.243 }' 00:27:59.243 23:09:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:59.243 23:09:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:59.243 23:09:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:59.243 23:09:34 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:59.243 23:09:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:00.182 [2024-12-09 23:09:35.344074] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:00.182 [2024-12-09 23:09:35.344160] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:00.182 [2024-12-09 23:09:35.344262] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:00.182 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:00.182 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:00.182 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:00.182 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:00.182 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:00.182 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:00.182 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:00.182 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.182 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:00.182 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.441 23:09:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:00.441 "name": "raid_bdev1", 00:28:00.441 "uuid": "874b691a-5109-45de-96ed-353ae7a39090", 00:28:00.441 "strip_size_kb": 0, 00:28:00.441 "state": "online", 00:28:00.441 "raid_level": "raid1", 00:28:00.441 "superblock": true, 00:28:00.441 "num_base_bdevs": 2, 00:28:00.441 "num_base_bdevs_discovered": 2, 00:28:00.441 "num_base_bdevs_operational": 2, 00:28:00.441 "base_bdevs_list": [ 00:28:00.441 { 00:28:00.441 "name": "spare", 00:28:00.441 "uuid": "c477d0b3-6491-5123-9022-d54fff5c6db1", 00:28:00.441 "is_configured": true, 00:28:00.441 "data_offset": 256, 00:28:00.441 "data_size": 7936 00:28:00.441 }, 00:28:00.441 { 00:28:00.441 "name": "BaseBdev2", 00:28:00.441 "uuid": "143f7013-625e-5de0-9dbd-5517499a1405", 00:28:00.441 "is_configured": true, 00:28:00.441 "data_offset": 256, 00:28:00.441 "data_size": 7936 00:28:00.441 } 00:28:00.441 ] 00:28:00.441 }' 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:00.441 23:09:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:00.441 "name": "raid_bdev1", 00:28:00.441 "uuid": "874b691a-5109-45de-96ed-353ae7a39090", 00:28:00.441 "strip_size_kb": 0, 00:28:00.441 "state": "online", 00:28:00.441 "raid_level": "raid1", 00:28:00.441 "superblock": true, 00:28:00.441 "num_base_bdevs": 2, 00:28:00.441 "num_base_bdevs_discovered": 2, 00:28:00.441 "num_base_bdevs_operational": 2, 00:28:00.441 "base_bdevs_list": [ 00:28:00.441 { 00:28:00.441 "name": "spare", 00:28:00.441 "uuid": "c477d0b3-6491-5123-9022-d54fff5c6db1", 00:28:00.441 "is_configured": true, 00:28:00.441 "data_offset": 256, 00:28:00.441 "data_size": 7936 00:28:00.441 }, 00:28:00.441 { 00:28:00.441 "name": "BaseBdev2", 00:28:00.441 "uuid": "143f7013-625e-5de0-9dbd-5517499a1405", 00:28:00.441 "is_configured": true, 00:28:00.441 "data_offset": 256, 00:28:00.441 "data_size": 7936 00:28:00.441 } 00:28:00.441 ] 00:28:00.441 }' 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 
-- # set +x 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.441 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:00.441 "name": "raid_bdev1", 00:28:00.441 "uuid": "874b691a-5109-45de-96ed-353ae7a39090", 00:28:00.441 "strip_size_kb": 0, 00:28:00.441 "state": "online", 00:28:00.441 "raid_level": "raid1", 00:28:00.442 "superblock": true, 00:28:00.442 "num_base_bdevs": 2, 00:28:00.442 "num_base_bdevs_discovered": 2, 00:28:00.442 "num_base_bdevs_operational": 2, 00:28:00.442 "base_bdevs_list": [ 00:28:00.442 { 00:28:00.442 "name": "spare", 00:28:00.442 "uuid": "c477d0b3-6491-5123-9022-d54fff5c6db1", 00:28:00.442 "is_configured": true, 00:28:00.442 "data_offset": 256, 00:28:00.442 "data_size": 7936 00:28:00.442 }, 00:28:00.442 { 00:28:00.442 "name": "BaseBdev2", 00:28:00.442 "uuid": "143f7013-625e-5de0-9dbd-5517499a1405", 00:28:00.442 "is_configured": true, 00:28:00.442 "data_offset": 256, 00:28:00.442 "data_size": 7936 00:28:00.442 } 00:28:00.442 ] 00:28:00.442 }' 00:28:00.442 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:00.442 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:00.701 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:00.701 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.701 23:09:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:00.701 [2024-12-09 23:09:36.004493] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:00.701 [2024-12-09 23:09:36.004525] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:00.701 [2024-12-09 23:09:36.004585] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:00.701 [2024-12-09 23:09:36.004643] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:00.701 [2024-12-09 23:09:36.004652] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:28:00.701 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.701 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:00.701 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.701 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:28:00.701 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:00.701 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.701 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:28:00.701 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:28:00.701 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:28:00.701 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:28:00.701 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:28:00.701 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:28:00.701 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:00.701 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:00.701 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:00.701 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:28:00.701 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:00.701 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:00.701 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:28:00.964 /dev/nbd0 00:28:00.964 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:00.964 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:00.964 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:28:00.964 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:28:00.964 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:00.964 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:00.964 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:28:00.964 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:28:00.964 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:00.964 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:00.964 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:00.964 1+0 records in 00:28:00.964 1+0 records out 00:28:00.964 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034888 s, 11.7 MB/s 00:28:00.964 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:00.964 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:28:00.964 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:00.964 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:00.964 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:28:00.964 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:00.964 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:00.964 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:28:01.227 /dev/nbd1 00:28:01.227 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:01.227 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:01.227 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:28:01.227 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:28:01.227 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:01.227 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 
00:28:01.227 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:28:01.227 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:28:01.227 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:01.227 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:01.227 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:01.227 1+0 records in 00:28:01.227 1+0 records out 00:28:01.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028535 s, 14.4 MB/s 00:28:01.227 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:01.227 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:28:01.227 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:01.227 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:01.227 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:28:01.227 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:01.227 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:01.227 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:28:01.488 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:28:01.488 23:09:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:28:01.488 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:01.488 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:01.488 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:28:01.488 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:01.488 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:28:01.488 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:01.488 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:01.488 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:01.488 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:01.488 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:01.488 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:01.488 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:28:01.488 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:28:01.488 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:01.488 23:09:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:28:01.810 23:09:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:01.811 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:01.811 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:01.811 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:01.811 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:01.811 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:01.811 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:28:01.811 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:28:01.811 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:28:01.811 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:28:01.811 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.811 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:01.811 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.811 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:28:01.811 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.811 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:01.811 [2024-12-09 23:09:37.060767] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:01.811 [2024-12-09 23:09:37.060814] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:01.811 [2024-12-09 23:09:37.060833] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:28:01.811 [2024-12-09 23:09:37.060841] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:01.811 [2024-12-09 23:09:37.062542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:01.811 [2024-12-09 23:09:37.062573] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:01.811 [2024-12-09 23:09:37.062628] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:01.811 [2024-12-09 23:09:37.062667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:01.811 [2024-12-09 23:09:37.062771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:01.811 spare 00:28:01.811 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.811 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:28:01.811 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.811 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:02.073 [2024-12-09 23:09:37.162843] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:28:02.073 [2024-12-09 23:09:37.162978] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:28:02.073 [2024-12-09 23:09:37.163090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:28:02.073 [2024-12-09 23:09:37.163243] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:28:02.073 [2024-12-09 23:09:37.163251] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:28:02.073 [2024-12-09 23:09:37.163358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:02.073 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.073 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:02.073 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:02.073 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:02.073 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:02.073 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:02.073 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:02.073 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:02.073 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:02.073 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:02.073 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:02.073 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:02.073 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.073 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:02.073 23:09:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:02.073 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.073 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:02.073 "name": "raid_bdev1", 00:28:02.073 "uuid": "874b691a-5109-45de-96ed-353ae7a39090", 00:28:02.073 "strip_size_kb": 0, 00:28:02.073 "state": "online", 00:28:02.073 "raid_level": "raid1", 00:28:02.073 "superblock": true, 00:28:02.073 "num_base_bdevs": 2, 00:28:02.073 "num_base_bdevs_discovered": 2, 00:28:02.073 "num_base_bdevs_operational": 2, 00:28:02.073 "base_bdevs_list": [ 00:28:02.073 { 00:28:02.073 "name": "spare", 00:28:02.073 "uuid": "c477d0b3-6491-5123-9022-d54fff5c6db1", 00:28:02.073 "is_configured": true, 00:28:02.073 "data_offset": 256, 00:28:02.073 "data_size": 7936 00:28:02.073 }, 00:28:02.073 { 00:28:02.073 "name": "BaseBdev2", 00:28:02.073 "uuid": "143f7013-625e-5de0-9dbd-5517499a1405", 00:28:02.073 "is_configured": true, 00:28:02.073 "data_offset": 256, 00:28:02.073 "data_size": 7936 00:28:02.073 } 00:28:02.073 ] 00:28:02.073 }' 00:28:02.073 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:02.073 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:02.334 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:02.334 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:02.334 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:02.334 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:02.334 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:28:02.334 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:02.334 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.334 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:02.334 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:02.334 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.334 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:02.334 "name": "raid_bdev1", 00:28:02.334 "uuid": "874b691a-5109-45de-96ed-353ae7a39090", 00:28:02.334 "strip_size_kb": 0, 00:28:02.334 "state": "online", 00:28:02.334 "raid_level": "raid1", 00:28:02.334 "superblock": true, 00:28:02.334 "num_base_bdevs": 2, 00:28:02.334 "num_base_bdevs_discovered": 2, 00:28:02.334 "num_base_bdevs_operational": 2, 00:28:02.334 "base_bdevs_list": [ 00:28:02.334 { 00:28:02.334 "name": "spare", 00:28:02.334 "uuid": "c477d0b3-6491-5123-9022-d54fff5c6db1", 00:28:02.334 "is_configured": true, 00:28:02.334 "data_offset": 256, 00:28:02.335 "data_size": 7936 00:28:02.335 }, 00:28:02.335 { 00:28:02.335 "name": "BaseBdev2", 00:28:02.335 "uuid": "143f7013-625e-5de0-9dbd-5517499a1405", 00:28:02.335 "is_configured": true, 00:28:02.335 "data_offset": 256, 00:28:02.335 "data_size": 7936 00:28:02.335 } 00:28:02.335 ] 00:28:02.335 }' 00:28:02.335 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:02.335 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:02.335 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:02.335 
23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:02.335 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:28:02.335 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:02.335 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.335 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:02.335 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.335 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:28:02.335 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:28:02.335 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.335 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:02.335 [2024-12-09 23:09:37.596923] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:02.335 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.335 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:02.335 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:02.335 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:02.335 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:02.335 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:02.335 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:02.335 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:02.335 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:02.335 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:02.335 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:02.335 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:02.335 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.335 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:02.335 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:02.335 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.335 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:02.335 "name": "raid_bdev1", 00:28:02.335 "uuid": "874b691a-5109-45de-96ed-353ae7a39090", 00:28:02.335 "strip_size_kb": 0, 00:28:02.335 "state": "online", 00:28:02.335 "raid_level": "raid1", 00:28:02.335 "superblock": true, 00:28:02.335 "num_base_bdevs": 2, 00:28:02.335 "num_base_bdevs_discovered": 1, 00:28:02.335 "num_base_bdevs_operational": 1, 00:28:02.335 "base_bdevs_list": [ 00:28:02.335 { 00:28:02.335 "name": null, 00:28:02.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.335 "is_configured": false, 00:28:02.335 "data_offset": 0, 00:28:02.335 "data_size": 7936 00:28:02.335 }, 00:28:02.335 { 00:28:02.335 
"name": "BaseBdev2", 00:28:02.335 "uuid": "143f7013-625e-5de0-9dbd-5517499a1405", 00:28:02.335 "is_configured": true, 00:28:02.335 "data_offset": 256, 00:28:02.335 "data_size": 7936 00:28:02.335 } 00:28:02.335 ] 00:28:02.335 }' 00:28:02.335 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:02.335 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:02.599 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:02.599 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.599 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:02.599 [2024-12-09 23:09:37.912987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:02.599 [2024-12-09 23:09:37.913155] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:02.599 [2024-12-09 23:09:37.913170] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
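The log above shows `verify_raid_bdev_state raid_bdev1 online raid1 0 1` passing after `bdev_raid_remove_base_bdev spare`: the raid1 bdev stays online in degraded mode with one discovered/operational member and the removed slot zeroed out. A minimal sketch of that check in Python, parsing the same `bdev_raid_get_bdevs` JSON shape seen in the log (the `verify_degraded` helper is illustrative, not part of the SPDK test suite):

```python
import json

# JSON mirrors the raid_bdev_info dump in the log after removing "spare".
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 1,
  "base_bdevs_list": [
    {"name": null, "is_configured": false, "data_offset": 0, "data_size": 7936},
    {"name": "BaseBdev2", "is_configured": true, "data_offset": 256, "data_size": 7936}
  ]
}
""")

def verify_degraded(info):
    # raid1 keeps serving I/O with one member gone: state stays "online",
    # but only one base bdev remains discovered/operational, and the
    # removed slot is left unconfigured with a null name.
    assert info["state"] == "online"
    assert info["raid_level"] == "raid1"
    assert info["num_base_bdevs_discovered"] == 1
    assert info["num_base_bdevs_operational"] == 1
    assert any(not b["is_configured"] for b in info["base_bdevs_list"])
    return True

print(verify_degraded(raid_bdev_info))  # prints True
```

The shell test performs the same checks with `jq -r` selectors against the `rpc_cmd bdev_raid_get_bdevs all` output, as the xtrace lines show.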
00:28:02.599 [2024-12-09 23:09:37.913202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:02.599 [2024-12-09 23:09:37.920632] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:28:02.599 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.599 23:09:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:28:02.599 [2024-12-09 23:09:37.922242] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:03.987 23:09:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:03.987 23:09:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:03.987 23:09:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:03.987 23:09:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:03.987 23:09:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:03.987 23:09:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:03.987 23:09:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:03.987 23:09:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.987 23:09:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:03.987 23:09:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.987 23:09:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:03.987 "name": "raid_bdev1", 00:28:03.987 
"uuid": "874b691a-5109-45de-96ed-353ae7a39090", 00:28:03.987 "strip_size_kb": 0, 00:28:03.987 "state": "online", 00:28:03.987 "raid_level": "raid1", 00:28:03.987 "superblock": true, 00:28:03.987 "num_base_bdevs": 2, 00:28:03.987 "num_base_bdevs_discovered": 2, 00:28:03.987 "num_base_bdevs_operational": 2, 00:28:03.987 "process": { 00:28:03.987 "type": "rebuild", 00:28:03.987 "target": "spare", 00:28:03.987 "progress": { 00:28:03.987 "blocks": 2560, 00:28:03.987 "percent": 32 00:28:03.987 } 00:28:03.987 }, 00:28:03.987 "base_bdevs_list": [ 00:28:03.987 { 00:28:03.987 "name": "spare", 00:28:03.987 "uuid": "c477d0b3-6491-5123-9022-d54fff5c6db1", 00:28:03.987 "is_configured": true, 00:28:03.987 "data_offset": 256, 00:28:03.987 "data_size": 7936 00:28:03.987 }, 00:28:03.987 { 00:28:03.987 "name": "BaseBdev2", 00:28:03.987 "uuid": "143f7013-625e-5de0-9dbd-5517499a1405", 00:28:03.987 "is_configured": true, 00:28:03.987 "data_offset": 256, 00:28:03.987 "data_size": 7936 00:28:03.987 } 00:28:03.987 ] 00:28:03.987 }' 00:28:03.987 23:09:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:03.987 23:09:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:03.987 23:09:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:03.987 23:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:03.987 23:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:28:03.987 23:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.987 23:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:03.987 [2024-12-09 23:09:39.024762] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:03.987 
[2024-12-09 23:09:39.027606] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:03.987 [2024-12-09 23:09:39.027654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:03.987 [2024-12-09 23:09:39.027667] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:03.987 [2024-12-09 23:09:39.027675] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:03.987 23:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.987 23:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:03.987 23:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:03.987 23:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:03.987 23:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:03.987 23:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:03.987 23:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:03.987 23:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:03.987 23:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:03.987 23:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:03.987 23:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:03.987 23:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:03.987 23:09:39 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:03.987 23:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.988 23:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:03.988 23:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.988 23:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:03.988 "name": "raid_bdev1", 00:28:03.988 "uuid": "874b691a-5109-45de-96ed-353ae7a39090", 00:28:03.988 "strip_size_kb": 0, 00:28:03.988 "state": "online", 00:28:03.988 "raid_level": "raid1", 00:28:03.988 "superblock": true, 00:28:03.988 "num_base_bdevs": 2, 00:28:03.988 "num_base_bdevs_discovered": 1, 00:28:03.988 "num_base_bdevs_operational": 1, 00:28:03.988 "base_bdevs_list": [ 00:28:03.988 { 00:28:03.988 "name": null, 00:28:03.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:03.988 "is_configured": false, 00:28:03.988 "data_offset": 0, 00:28:03.988 "data_size": 7936 00:28:03.988 }, 00:28:03.988 { 00:28:03.988 "name": "BaseBdev2", 00:28:03.988 "uuid": "143f7013-625e-5de0-9dbd-5517499a1405", 00:28:03.988 "is_configured": true, 00:28:03.988 "data_offset": 256, 00:28:03.988 "data_size": 7936 00:28:03.988 } 00:28:03.988 ] 00:28:03.988 }' 00:28:03.988 23:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:03.988 23:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:04.249 23:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:28:04.249 23:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.249 23:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:28:04.249 [2024-12-09 23:09:39.375952] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:04.249 [2024-12-09 23:09:39.376006] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:04.249 [2024-12-09 23:09:39.376022] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:28:04.249 [2024-12-09 23:09:39.376031] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:04.249 [2024-12-09 23:09:39.376237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:04.249 [2024-12-09 23:09:39.376257] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:04.249 [2024-12-09 23:09:39.376305] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:04.249 [2024-12-09 23:09:39.376317] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:04.249 [2024-12-09 23:09:39.376325] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
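The `raid_bdev_examine_sb` records above show why re-creating the `spare` passthru bdev triggers an automatic re-add and rebuild: its on-disk raid superblock carries sequence number 4, which is smaller than the live raid bdev's 5, so it is treated as a stale member and re-added. A toy sketch of that decision under the stated assumption (the function name is hypothetical, not SPDK API; the real logic lives in `bdev_raid.c:raid_bdev_examine_sb`):

```python
def should_readd(bdev_sb_seq: int, raid_bdev_seq: int) -> bool:
    # A member whose superblock sequence number lags the raid bdev's is
    # stale: it is re-added and then resynced by a rebuild process, as the
    # "Started rebuild on raid bdev raid_bdev1" notice in the log confirms.
    return bdev_sb_seq < raid_bdev_seq

print(should_readd(4, 5))  # "spare" in the log: seq 4 vs raid seq 5 -> True
print(should_readd(5, 5))  # up-to-date member -> False
```

Later in the run, `BaseBdev1` (seq 1) also compares as stale, but its examine fails with "raid superblock does not contain this bdev's uuid", so the explicit `bdev_raid_add_base_bdev` is rejected with `-22` instead of re-adding it.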
00:28:04.249 [2024-12-09 23:09:39.376341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:04.249 [2024-12-09 23:09:39.383831] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:28:04.249 spare 00:28:04.249 23:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.249 23:09:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:28:04.249 [2024-12-09 23:09:39.385535] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:05.191 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:05.191 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:05.191 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:05.191 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:05.191 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:05.191 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:05.191 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.191 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:05.191 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:05.191 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.191 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:05.191 "name": 
"raid_bdev1", 00:28:05.191 "uuid": "874b691a-5109-45de-96ed-353ae7a39090", 00:28:05.191 "strip_size_kb": 0, 00:28:05.191 "state": "online", 00:28:05.191 "raid_level": "raid1", 00:28:05.191 "superblock": true, 00:28:05.191 "num_base_bdevs": 2, 00:28:05.191 "num_base_bdevs_discovered": 2, 00:28:05.191 "num_base_bdevs_operational": 2, 00:28:05.191 "process": { 00:28:05.191 "type": "rebuild", 00:28:05.191 "target": "spare", 00:28:05.191 "progress": { 00:28:05.191 "blocks": 2560, 00:28:05.191 "percent": 32 00:28:05.191 } 00:28:05.191 }, 00:28:05.191 "base_bdevs_list": [ 00:28:05.191 { 00:28:05.191 "name": "spare", 00:28:05.191 "uuid": "c477d0b3-6491-5123-9022-d54fff5c6db1", 00:28:05.191 "is_configured": true, 00:28:05.191 "data_offset": 256, 00:28:05.191 "data_size": 7936 00:28:05.191 }, 00:28:05.191 { 00:28:05.191 "name": "BaseBdev2", 00:28:05.191 "uuid": "143f7013-625e-5de0-9dbd-5517499a1405", 00:28:05.191 "is_configured": true, 00:28:05.191 "data_offset": 256, 00:28:05.191 "data_size": 7936 00:28:05.191 } 00:28:05.191 ] 00:28:05.191 }' 00:28:05.191 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:05.191 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:05.191 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:05.191 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:05.191 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:28:05.191 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.191 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:05.191 [2024-12-09 23:09:40.495978] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:28:05.452 [2024-12-09 23:09:40.591360] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:05.452 [2024-12-09 23:09:40.591572] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:05.452 [2024-12-09 23:09:40.591638] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:05.452 [2024-12-09 23:09:40.591659] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:05.452 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.452 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:05.452 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:05.452 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:05.452 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:05.452 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:05.452 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:05.452 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:05.452 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:05.452 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:05.452 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:05.452 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:28:05.452 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:05.452 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.452 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:05.452 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.452 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:05.452 "name": "raid_bdev1", 00:28:05.452 "uuid": "874b691a-5109-45de-96ed-353ae7a39090", 00:28:05.452 "strip_size_kb": 0, 00:28:05.452 "state": "online", 00:28:05.452 "raid_level": "raid1", 00:28:05.452 "superblock": true, 00:28:05.452 "num_base_bdevs": 2, 00:28:05.452 "num_base_bdevs_discovered": 1, 00:28:05.452 "num_base_bdevs_operational": 1, 00:28:05.452 "base_bdevs_list": [ 00:28:05.452 { 00:28:05.452 "name": null, 00:28:05.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:05.452 "is_configured": false, 00:28:05.452 "data_offset": 0, 00:28:05.452 "data_size": 7936 00:28:05.452 }, 00:28:05.452 { 00:28:05.452 "name": "BaseBdev2", 00:28:05.452 "uuid": "143f7013-625e-5de0-9dbd-5517499a1405", 00:28:05.452 "is_configured": true, 00:28:05.452 "data_offset": 256, 00:28:05.452 "data_size": 7936 00:28:05.452 } 00:28:05.452 ] 00:28:05.452 }' 00:28:05.452 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:05.452 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:05.713 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:05.713 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:05.713 23:09:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:05.713 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:05.713 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:05.713 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:05.713 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:05.713 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.713 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:05.713 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.713 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:05.713 "name": "raid_bdev1", 00:28:05.713 "uuid": "874b691a-5109-45de-96ed-353ae7a39090", 00:28:05.713 "strip_size_kb": 0, 00:28:05.713 "state": "online", 00:28:05.713 "raid_level": "raid1", 00:28:05.713 "superblock": true, 00:28:05.713 "num_base_bdevs": 2, 00:28:05.713 "num_base_bdevs_discovered": 1, 00:28:05.713 "num_base_bdevs_operational": 1, 00:28:05.713 "base_bdevs_list": [ 00:28:05.713 { 00:28:05.713 "name": null, 00:28:05.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:05.713 "is_configured": false, 00:28:05.713 "data_offset": 0, 00:28:05.713 "data_size": 7936 00:28:05.713 }, 00:28:05.713 { 00:28:05.713 "name": "BaseBdev2", 00:28:05.713 "uuid": "143f7013-625e-5de0-9dbd-5517499a1405", 00:28:05.713 "is_configured": true, 00:28:05.713 "data_offset": 256, 00:28:05.713 "data_size": 7936 00:28:05.713 } 00:28:05.713 ] 00:28:05.713 }' 00:28:05.713 23:09:40 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:05.713 23:09:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:05.713 23:09:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:05.713 23:09:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:05.713 23:09:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:28:05.713 23:09:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.713 23:09:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:05.713 23:09:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.713 23:09:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:05.713 23:09:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.713 23:09:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:05.713 [2024-12-09 23:09:41.055864] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:05.713 [2024-12-09 23:09:41.055912] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:05.713 [2024-12-09 23:09:41.055931] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:28:05.713 [2024-12-09 23:09:41.055938] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:05.713 [2024-12-09 23:09:41.056133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:05.713 [2024-12-09 23:09:41.056143] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:28:05.713 [2024-12-09 23:09:41.056191] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:28:05.713 [2024-12-09 23:09:41.056201] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:05.713 [2024-12-09 23:09:41.056208] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:05.713 [2024-12-09 23:09:41.056218] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:28:05.713 BaseBdev1 00:28:05.713 23:09:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.713 23:09:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:28:07.098 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:07.098 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:07.098 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:07.098 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:07.098 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:07.098 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:07.098 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:07.098 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:07.098 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:28:07.098 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:07.098 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:07.098 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:07.098 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.098 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:07.098 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.098 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:07.098 "name": "raid_bdev1", 00:28:07.098 "uuid": "874b691a-5109-45de-96ed-353ae7a39090", 00:28:07.098 "strip_size_kb": 0, 00:28:07.098 "state": "online", 00:28:07.098 "raid_level": "raid1", 00:28:07.098 "superblock": true, 00:28:07.098 "num_base_bdevs": 2, 00:28:07.098 "num_base_bdevs_discovered": 1, 00:28:07.098 "num_base_bdevs_operational": 1, 00:28:07.098 "base_bdevs_list": [ 00:28:07.098 { 00:28:07.098 "name": null, 00:28:07.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:07.098 "is_configured": false, 00:28:07.098 "data_offset": 0, 00:28:07.098 "data_size": 7936 00:28:07.098 }, 00:28:07.098 { 00:28:07.098 "name": "BaseBdev2", 00:28:07.098 "uuid": "143f7013-625e-5de0-9dbd-5517499a1405", 00:28:07.098 "is_configured": true, 00:28:07.098 "data_offset": 256, 00:28:07.098 "data_size": 7936 00:28:07.098 } 00:28:07.098 ] 00:28:07.098 }' 00:28:07.098 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:07.098 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:07.098 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:28:07.098 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:07.098 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:07.098 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:07.098 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:07.098 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:07.098 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.098 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:07.098 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:07.098 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.098 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:07.098 "name": "raid_bdev1", 00:28:07.098 "uuid": "874b691a-5109-45de-96ed-353ae7a39090", 00:28:07.098 "strip_size_kb": 0, 00:28:07.098 "state": "online", 00:28:07.098 "raid_level": "raid1", 00:28:07.098 "superblock": true, 00:28:07.098 "num_base_bdevs": 2, 00:28:07.098 "num_base_bdevs_discovered": 1, 00:28:07.098 "num_base_bdevs_operational": 1, 00:28:07.098 "base_bdevs_list": [ 00:28:07.098 { 00:28:07.098 "name": null, 00:28:07.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:07.098 "is_configured": false, 00:28:07.098 "data_offset": 0, 00:28:07.098 "data_size": 7936 00:28:07.098 }, 00:28:07.098 { 00:28:07.098 "name": "BaseBdev2", 00:28:07.098 "uuid": "143f7013-625e-5de0-9dbd-5517499a1405", 00:28:07.098 "is_configured": 
true, 00:28:07.098 "data_offset": 256, 00:28:07.098 "data_size": 7936 00:28:07.098 } 00:28:07.098 ] 00:28:07.098 }' 00:28:07.098 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:07.098 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:07.098 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:07.359 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:07.360 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:07.360 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:28:07.360 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:07.360 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:07.360 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:07.360 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:07.360 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:07.360 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:07.360 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.360 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:07.360 [2024-12-09 23:09:42.472187] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:07.360 [2024-12-09 23:09:42.472307] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:07.360 [2024-12-09 23:09:42.472319] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:07.360 request: 00:28:07.360 { 00:28:07.360 "base_bdev": "BaseBdev1", 00:28:07.360 "raid_bdev": "raid_bdev1", 00:28:07.360 "method": "bdev_raid_add_base_bdev", 00:28:07.360 "req_id": 1 00:28:07.360 } 00:28:07.360 Got JSON-RPC error response 00:28:07.360 response: 00:28:07.360 { 00:28:07.360 "code": -22, 00:28:07.360 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:28:07.360 } 00:28:07.360 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:07.360 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:28:07.360 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:07.360 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:07.360 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:07.360 23:09:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:28:08.309 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:08.309 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:08.309 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:08.309 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:28:08.309 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:08.309 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:08.309 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:08.309 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:08.309 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:08.309 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:08.309 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:08.309 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.309 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:08.309 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:08.309 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.309 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:08.309 "name": "raid_bdev1", 00:28:08.309 "uuid": "874b691a-5109-45de-96ed-353ae7a39090", 00:28:08.309 "strip_size_kb": 0, 00:28:08.309 "state": "online", 00:28:08.309 "raid_level": "raid1", 00:28:08.309 "superblock": true, 00:28:08.309 "num_base_bdevs": 2, 00:28:08.309 "num_base_bdevs_discovered": 1, 00:28:08.309 "num_base_bdevs_operational": 1, 00:28:08.309 "base_bdevs_list": [ 00:28:08.309 { 00:28:08.309 "name": null, 00:28:08.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:08.309 "is_configured": false, 00:28:08.309 
"data_offset": 0, 00:28:08.309 "data_size": 7936 00:28:08.309 }, 00:28:08.309 { 00:28:08.309 "name": "BaseBdev2", 00:28:08.309 "uuid": "143f7013-625e-5de0-9dbd-5517499a1405", 00:28:08.309 "is_configured": true, 00:28:08.309 "data_offset": 256, 00:28:08.309 "data_size": 7936 00:28:08.309 } 00:28:08.309 ] 00:28:08.309 }' 00:28:08.309 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:08.309 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:08.589 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:08.589 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:08.589 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:08.589 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:08.589 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:08.589 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:08.589 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:08.589 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.589 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:08.589 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.589 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:08.589 "name": "raid_bdev1", 00:28:08.589 "uuid": "874b691a-5109-45de-96ed-353ae7a39090", 00:28:08.589 
"strip_size_kb": 0, 00:28:08.589 "state": "online", 00:28:08.589 "raid_level": "raid1", 00:28:08.589 "superblock": true, 00:28:08.589 "num_base_bdevs": 2, 00:28:08.589 "num_base_bdevs_discovered": 1, 00:28:08.589 "num_base_bdevs_operational": 1, 00:28:08.589 "base_bdevs_list": [ 00:28:08.589 { 00:28:08.589 "name": null, 00:28:08.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:08.589 "is_configured": false, 00:28:08.589 "data_offset": 0, 00:28:08.589 "data_size": 7936 00:28:08.589 }, 00:28:08.589 { 00:28:08.589 "name": "BaseBdev2", 00:28:08.589 "uuid": "143f7013-625e-5de0-9dbd-5517499a1405", 00:28:08.589 "is_configured": true, 00:28:08.589 "data_offset": 256, 00:28:08.589 "data_size": 7936 00:28:08.589 } 00:28:08.589 ] 00:28:08.589 }' 00:28:08.589 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:08.589 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:08.589 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:08.590 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:08.590 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 85247 00:28:08.590 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 85247 ']' 00:28:08.590 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 85247 00:28:08.590 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:28:08.590 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:08.590 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85247 00:28:08.851 killing process with 
pid 85247 00:28:08.851 Received shutdown signal, test time was about 60.000000 seconds 00:28:08.851 00:28:08.851 Latency(us) 00:28:08.851 [2024-12-09T23:09:44.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:08.851 [2024-12-09T23:09:44.214Z] =================================================================================================================== 00:28:08.851 [2024-12-09T23:09:44.214Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:08.851 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:08.851 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:08.851 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85247' 00:28:08.851 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 85247 00:28:08.851 [2024-12-09 23:09:43.961867] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:08.851 23:09:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 85247 00:28:08.851 [2024-12-09 23:09:43.961972] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:08.851 [2024-12-09 23:09:43.962010] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:08.851 [2024-12-09 23:09:43.962020] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:28:08.851 [2024-12-09 23:09:44.123128] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:09.421 23:09:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:28:09.421 00:28:09.421 real 0m16.938s 00:28:09.421 user 0m21.517s 00:28:09.421 sys 0m1.890s 00:28:09.421 23:09:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:09.421 23:09:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:28:09.421 ************************************ 00:28:09.421 END TEST raid_rebuild_test_sb_md_separate 00:28:09.421 ************************************ 00:28:09.421 23:09:44 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:28:09.421 23:09:44 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:28:09.421 23:09:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:28:09.421 23:09:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:09.421 23:09:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:09.421 ************************************ 00:28:09.421 START TEST raid_state_function_test_sb_md_interleaved 00:28:09.421 ************************************ 00:28:09.421 23:09:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:28:09.421 23:09:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:28:09.421 23:09:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:28:09.421 23:09:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:28:09.421 23:09:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:28:09.421 23:09:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:28:09.421 23:09:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:09.421 23:09:44 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:28:09.421 23:09:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:09.421 23:09:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:09.421 23:09:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:28:09.421 23:09:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:09.421 23:09:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:09.421 23:09:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:09.421 23:09:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:28:09.421 23:09:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:28:09.421 23:09:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:28:09.421 23:09:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:28:09.421 23:09:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:28:09.421 23:09:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:28:09.421 23:09:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:28:09.421 23:09:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:28:09.421 23:09:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:28:09.421 23:09:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=85914 00:28:09.421 23:09:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85914' 00:28:09.421 Process raid pid: 85914 00:28:09.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:09.421 23:09:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:28:09.421 23:09:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 85914 00:28:09.421 23:09:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 85914 ']' 00:28:09.421 23:09:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:09.421 23:09:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:09.421 23:09:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:09.421 23:09:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:09.421 23:09:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:09.682 [2024-12-09 23:09:44.813415] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:28:09.682 [2024-12-09 23:09:44.813653] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:09.682 [2024-12-09 23:09:44.967370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.941 [2024-12-09 23:09:45.070287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.941 [2024-12-09 23:09:45.208818] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:09.941 [2024-12-09 23:09:45.208990] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:10.526 23:09:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:10.526 23:09:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:28:10.526 23:09:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:28:10.526 23:09:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.526 23:09:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:10.526 [2024-12-09 23:09:45.704645] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:10.526 [2024-12-09 23:09:45.704699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:10.526 [2024-12-09 23:09:45.704709] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:10.526 [2024-12-09 23:09:45.704718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:10.526 23:09:45 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.526 23:09:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:28:10.526 23:09:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:10.526 23:09:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:10.526 23:09:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:10.526 23:09:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:10.526 23:09:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:10.526 23:09:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:10.526 23:09:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:10.526 23:09:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:10.526 23:09:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:10.526 23:09:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:10.526 23:09:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.526 23:09:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:10.526 23:09:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:10.526 23:09:45 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.526 23:09:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:10.526 "name": "Existed_Raid", 00:28:10.526 "uuid": "f901b384-4b13-444c-aecd-9ef6dc6092a2", 00:28:10.526 "strip_size_kb": 0, 00:28:10.526 "state": "configuring", 00:28:10.526 "raid_level": "raid1", 00:28:10.526 "superblock": true, 00:28:10.526 "num_base_bdevs": 2, 00:28:10.526 "num_base_bdevs_discovered": 0, 00:28:10.526 "num_base_bdevs_operational": 2, 00:28:10.526 "base_bdevs_list": [ 00:28:10.526 { 00:28:10.526 "name": "BaseBdev1", 00:28:10.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:10.526 "is_configured": false, 00:28:10.526 "data_offset": 0, 00:28:10.526 "data_size": 0 00:28:10.526 }, 00:28:10.526 { 00:28:10.526 "name": "BaseBdev2", 00:28:10.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:10.526 "is_configured": false, 00:28:10.526 "data_offset": 0, 00:28:10.526 "data_size": 0 00:28:10.526 } 00:28:10.526 ] 00:28:10.526 }' 00:28:10.526 23:09:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:10.526 23:09:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:10.808 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:28:10.808 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.808 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:10.808 [2024-12-09 23:09:46.036669] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:10.808 [2024-12-09 23:09:46.036705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:28:10.808 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.808 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:28:10.808 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.808 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:10.808 [2024-12-09 23:09:46.044659] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:10.808 [2024-12-09 23:09:46.044698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:10.808 [2024-12-09 23:09:46.044707] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:10.808 [2024-12-09 23:09:46.044718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:10.808 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.808 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:28:10.808 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.808 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:10.808 [2024-12-09 23:09:46.077121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:10.808 BaseBdev1 00:28:10.808 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.808 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:28:10.808 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:28:10.808 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:10.808 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:28:10.808 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:10.808 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:10.808 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:10.808 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.808 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:10.808 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.808 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:10.808 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.808 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:10.808 [ 00:28:10.808 { 00:28:10.808 "name": "BaseBdev1", 00:28:10.808 "aliases": [ 00:28:10.808 "fb8c9a12-2af3-4b5a-8e59-515d662df298" 00:28:10.808 ], 00:28:10.808 "product_name": "Malloc disk", 00:28:10.808 "block_size": 4128, 00:28:10.808 "num_blocks": 8192, 00:28:10.808 "uuid": "fb8c9a12-2af3-4b5a-8e59-515d662df298", 00:28:10.808 "md_size": 32, 00:28:10.808 
"md_interleave": true, 00:28:10.808 "dif_type": 0, 00:28:10.808 "assigned_rate_limits": { 00:28:10.808 "rw_ios_per_sec": 0, 00:28:10.808 "rw_mbytes_per_sec": 0, 00:28:10.808 "r_mbytes_per_sec": 0, 00:28:10.808 "w_mbytes_per_sec": 0 00:28:10.808 }, 00:28:10.808 "claimed": true, 00:28:10.808 "claim_type": "exclusive_write", 00:28:10.808 "zoned": false, 00:28:10.808 "supported_io_types": { 00:28:10.808 "read": true, 00:28:10.808 "write": true, 00:28:10.808 "unmap": true, 00:28:10.808 "flush": true, 00:28:10.808 "reset": true, 00:28:10.808 "nvme_admin": false, 00:28:10.808 "nvme_io": false, 00:28:10.808 "nvme_io_md": false, 00:28:10.808 "write_zeroes": true, 00:28:10.808 "zcopy": true, 00:28:10.808 "get_zone_info": false, 00:28:10.808 "zone_management": false, 00:28:10.808 "zone_append": false, 00:28:10.808 "compare": false, 00:28:10.808 "compare_and_write": false, 00:28:10.808 "abort": true, 00:28:10.808 "seek_hole": false, 00:28:10.808 "seek_data": false, 00:28:10.808 "copy": true, 00:28:10.808 "nvme_iov_md": false 00:28:10.808 }, 00:28:10.808 "memory_domains": [ 00:28:10.808 { 00:28:10.808 "dma_device_id": "system", 00:28:10.808 "dma_device_type": 1 00:28:10.808 }, 00:28:10.808 { 00:28:10.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:10.808 "dma_device_type": 2 00:28:10.808 } 00:28:10.808 ], 00:28:10.808 "driver_specific": {} 00:28:10.808 } 00:28:10.808 ] 00:28:10.808 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.808 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:28:10.808 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:28:10.808 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:10.808 23:09:46 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:10.808 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:10.809 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:10.809 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:10.809 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:10.809 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:10.809 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:10.809 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:10.809 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:10.809 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:10.809 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.809 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:10.809 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.809 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:10.809 "name": "Existed_Raid", 00:28:10.809 "uuid": "019c4a5c-2bae-4fc3-a40b-78673d810d25", 00:28:10.809 "strip_size_kb": 0, 00:28:10.809 "state": "configuring", 00:28:10.809 "raid_level": "raid1", 
00:28:10.809 "superblock": true, 00:28:10.809 "num_base_bdevs": 2, 00:28:10.809 "num_base_bdevs_discovered": 1, 00:28:10.809 "num_base_bdevs_operational": 2, 00:28:10.809 "base_bdevs_list": [ 00:28:10.809 { 00:28:10.809 "name": "BaseBdev1", 00:28:10.809 "uuid": "fb8c9a12-2af3-4b5a-8e59-515d662df298", 00:28:10.809 "is_configured": true, 00:28:10.809 "data_offset": 256, 00:28:10.809 "data_size": 7936 00:28:10.809 }, 00:28:10.809 { 00:28:10.809 "name": "BaseBdev2", 00:28:10.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:10.809 "is_configured": false, 00:28:10.809 "data_offset": 0, 00:28:10.809 "data_size": 0 00:28:10.809 } 00:28:10.809 ] 00:28:10.809 }' 00:28:10.809 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:10.809 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:11.387 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:28:11.387 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.387 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:11.387 [2024-12-09 23:09:46.449269] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:11.387 [2024-12-09 23:09:46.449313] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:28:11.387 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.387 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:28:11.387 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:28:11.387 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:11.387 [2024-12-09 23:09:46.457336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:11.387 [2024-12-09 23:09:46.459195] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:11.387 [2024-12-09 23:09:46.459346] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:11.387 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.387 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:28:11.387 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:11.387 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:28:11.387 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:11.387 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:11.387 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:11.387 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:11.387 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:11.387 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:11.387 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:11.387 
23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:11.387 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:11.387 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:11.387 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:11.387 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.387 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:11.387 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.387 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:11.387 "name": "Existed_Raid", 00:28:11.387 "uuid": "c2b1142a-87f0-45bb-98bc-d7ba4f1cabf6", 00:28:11.387 "strip_size_kb": 0, 00:28:11.387 "state": "configuring", 00:28:11.387 "raid_level": "raid1", 00:28:11.387 "superblock": true, 00:28:11.387 "num_base_bdevs": 2, 00:28:11.387 "num_base_bdevs_discovered": 1, 00:28:11.387 "num_base_bdevs_operational": 2, 00:28:11.387 "base_bdevs_list": [ 00:28:11.387 { 00:28:11.387 "name": "BaseBdev1", 00:28:11.387 "uuid": "fb8c9a12-2af3-4b5a-8e59-515d662df298", 00:28:11.387 "is_configured": true, 00:28:11.387 "data_offset": 256, 00:28:11.387 "data_size": 7936 00:28:11.387 }, 00:28:11.387 { 00:28:11.387 "name": "BaseBdev2", 00:28:11.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:11.387 "is_configured": false, 00:28:11.387 "data_offset": 0, 00:28:11.387 "data_size": 0 00:28:11.387 } 00:28:11.387 ] 00:28:11.387 }' 00:28:11.387 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:28:11.387 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:11.649 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:28:11.649 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.649 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:11.649 [2024-12-09 23:09:46.807805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:11.649 [2024-12-09 23:09:46.807983] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:28:11.649 [2024-12-09 23:09:46.807997] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:11.649 [2024-12-09 23:09:46.808074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:28:11.649 [2024-12-09 23:09:46.808168] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:28:11.649 [2024-12-09 23:09:46.808181] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:28:11.649 BaseBdev2 00:28:11.649 [2024-12-09 23:09:46.808237] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:11.649 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.649 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:28:11.649 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:28:11.649 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:28:11.649 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:28:11.649 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:11.649 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:11.649 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:11.649 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.649 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:11.649 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.649 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:11.649 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.649 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:11.649 [ 00:28:11.649 { 00:28:11.649 "name": "BaseBdev2", 00:28:11.649 "aliases": [ 00:28:11.649 "1b3a3c6b-614f-4ad9-8df5-b04e9f55eedb" 00:28:11.649 ], 00:28:11.649 "product_name": "Malloc disk", 00:28:11.649 "block_size": 4128, 00:28:11.649 "num_blocks": 8192, 00:28:11.649 "uuid": "1b3a3c6b-614f-4ad9-8df5-b04e9f55eedb", 00:28:11.649 "md_size": 32, 00:28:11.649 "md_interleave": true, 00:28:11.649 "dif_type": 0, 00:28:11.649 "assigned_rate_limits": { 00:28:11.649 "rw_ios_per_sec": 0, 00:28:11.649 "rw_mbytes_per_sec": 0, 00:28:11.649 "r_mbytes_per_sec": 0, 00:28:11.649 "w_mbytes_per_sec": 0 00:28:11.649 }, 00:28:11.649 "claimed": true, 00:28:11.649 "claim_type": "exclusive_write", 
00:28:11.649 "zoned": false, 00:28:11.649 "supported_io_types": { 00:28:11.649 "read": true, 00:28:11.649 "write": true, 00:28:11.649 "unmap": true, 00:28:11.649 "flush": true, 00:28:11.649 "reset": true, 00:28:11.649 "nvme_admin": false, 00:28:11.649 "nvme_io": false, 00:28:11.649 "nvme_io_md": false, 00:28:11.649 "write_zeroes": true, 00:28:11.649 "zcopy": true, 00:28:11.649 "get_zone_info": false, 00:28:11.649 "zone_management": false, 00:28:11.649 "zone_append": false, 00:28:11.649 "compare": false, 00:28:11.649 "compare_and_write": false, 00:28:11.649 "abort": true, 00:28:11.649 "seek_hole": false, 00:28:11.649 "seek_data": false, 00:28:11.649 "copy": true, 00:28:11.649 "nvme_iov_md": false 00:28:11.649 }, 00:28:11.649 "memory_domains": [ 00:28:11.649 { 00:28:11.649 "dma_device_id": "system", 00:28:11.649 "dma_device_type": 1 00:28:11.649 }, 00:28:11.649 { 00:28:11.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:11.649 "dma_device_type": 2 00:28:11.649 } 00:28:11.649 ], 00:28:11.649 "driver_specific": {} 00:28:11.649 } 00:28:11.649 ] 00:28:11.649 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.649 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:28:11.649 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:28:11.649 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:11.649 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:28:11.649 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:11.649 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:11.649 
23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:11.649 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:11.650 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:11.650 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:11.650 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:11.650 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:11.650 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:11.650 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:11.650 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.650 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:11.650 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:11.650 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.650 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:11.650 "name": "Existed_Raid", 00:28:11.650 "uuid": "c2b1142a-87f0-45bb-98bc-d7ba4f1cabf6", 00:28:11.650 "strip_size_kb": 0, 00:28:11.650 "state": "online", 00:28:11.650 "raid_level": "raid1", 00:28:11.650 "superblock": true, 00:28:11.650 "num_base_bdevs": 2, 00:28:11.650 "num_base_bdevs_discovered": 2, 00:28:11.650 
"num_base_bdevs_operational": 2, 00:28:11.650 "base_bdevs_list": [ 00:28:11.650 { 00:28:11.650 "name": "BaseBdev1", 00:28:11.650 "uuid": "fb8c9a12-2af3-4b5a-8e59-515d662df298", 00:28:11.650 "is_configured": true, 00:28:11.650 "data_offset": 256, 00:28:11.650 "data_size": 7936 00:28:11.650 }, 00:28:11.650 { 00:28:11.650 "name": "BaseBdev2", 00:28:11.650 "uuid": "1b3a3c6b-614f-4ad9-8df5-b04e9f55eedb", 00:28:11.650 "is_configured": true, 00:28:11.650 "data_offset": 256, 00:28:11.650 "data_size": 7936 00:28:11.650 } 00:28:11.650 ] 00:28:11.650 }' 00:28:11.650 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:11.650 23:09:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:11.911 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:28:11.911 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:28:11.911 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:11.911 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:11.911 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:28:11.911 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:11.911 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:28:11.911 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.911 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:11.911 23:09:47 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:11.911 [2024-12-09 23:09:47.160256] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:11.911 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.911 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:11.911 "name": "Existed_Raid", 00:28:11.911 "aliases": [ 00:28:11.911 "c2b1142a-87f0-45bb-98bc-d7ba4f1cabf6" 00:28:11.911 ], 00:28:11.911 "product_name": "Raid Volume", 00:28:11.911 "block_size": 4128, 00:28:11.911 "num_blocks": 7936, 00:28:11.911 "uuid": "c2b1142a-87f0-45bb-98bc-d7ba4f1cabf6", 00:28:11.911 "md_size": 32, 00:28:11.911 "md_interleave": true, 00:28:11.911 "dif_type": 0, 00:28:11.911 "assigned_rate_limits": { 00:28:11.911 "rw_ios_per_sec": 0, 00:28:11.911 "rw_mbytes_per_sec": 0, 00:28:11.911 "r_mbytes_per_sec": 0, 00:28:11.911 "w_mbytes_per_sec": 0 00:28:11.911 }, 00:28:11.911 "claimed": false, 00:28:11.911 "zoned": false, 00:28:11.911 "supported_io_types": { 00:28:11.911 "read": true, 00:28:11.912 "write": true, 00:28:11.912 "unmap": false, 00:28:11.912 "flush": false, 00:28:11.912 "reset": true, 00:28:11.912 "nvme_admin": false, 00:28:11.912 "nvme_io": false, 00:28:11.912 "nvme_io_md": false, 00:28:11.912 "write_zeroes": true, 00:28:11.912 "zcopy": false, 00:28:11.912 "get_zone_info": false, 00:28:11.912 "zone_management": false, 00:28:11.912 "zone_append": false, 00:28:11.912 "compare": false, 00:28:11.912 "compare_and_write": false, 00:28:11.912 "abort": false, 00:28:11.912 "seek_hole": false, 00:28:11.912 "seek_data": false, 00:28:11.912 "copy": false, 00:28:11.912 "nvme_iov_md": false 00:28:11.912 }, 00:28:11.912 "memory_domains": [ 00:28:11.912 { 00:28:11.912 "dma_device_id": "system", 00:28:11.912 "dma_device_type": 1 00:28:11.912 }, 00:28:11.912 { 00:28:11.912 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:28:11.912 "dma_device_type": 2 00:28:11.912 }, 00:28:11.912 { 00:28:11.912 "dma_device_id": "system", 00:28:11.912 "dma_device_type": 1 00:28:11.912 }, 00:28:11.912 { 00:28:11.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:11.912 "dma_device_type": 2 00:28:11.912 } 00:28:11.912 ], 00:28:11.912 "driver_specific": { 00:28:11.912 "raid": { 00:28:11.912 "uuid": "c2b1142a-87f0-45bb-98bc-d7ba4f1cabf6", 00:28:11.912 "strip_size_kb": 0, 00:28:11.912 "state": "online", 00:28:11.912 "raid_level": "raid1", 00:28:11.912 "superblock": true, 00:28:11.912 "num_base_bdevs": 2, 00:28:11.912 "num_base_bdevs_discovered": 2, 00:28:11.912 "num_base_bdevs_operational": 2, 00:28:11.912 "base_bdevs_list": [ 00:28:11.912 { 00:28:11.912 "name": "BaseBdev1", 00:28:11.912 "uuid": "fb8c9a12-2af3-4b5a-8e59-515d662df298", 00:28:11.912 "is_configured": true, 00:28:11.912 "data_offset": 256, 00:28:11.912 "data_size": 7936 00:28:11.912 }, 00:28:11.912 { 00:28:11.912 "name": "BaseBdev2", 00:28:11.912 "uuid": "1b3a3c6b-614f-4ad9-8df5-b04e9f55eedb", 00:28:11.912 "is_configured": true, 00:28:11.912 "data_offset": 256, 00:28:11.912 "data_size": 7936 00:28:11.912 } 00:28:11.912 ] 00:28:11.912 } 00:28:11.912 } 00:28:11.912 }' 00:28:11.912 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:11.912 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:28:11.912 BaseBdev2' 00:28:11.912 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:11.912 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:28:11.912 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:28:11.912 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:28:11.912 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.912 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:11.912 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:11.912 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.174 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:28:12.174 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:28:12.174 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:12.174 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:12.174 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:28:12.174 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.174 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:12.174 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.174 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:28:12.174 
23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:28:12.174 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:28:12.174 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.174 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:12.174 [2024-12-09 23:09:47.320022] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:12.174 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.174 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:28:12.174 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:28:12.174 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:12.174 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:28:12.174 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:28:12.174 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:28:12.174 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:12.174 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:12.174 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:12.174 23:09:47 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:12.174 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:12.174 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:12.175 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:12.175 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:12.175 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:12.175 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:12.175 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:12.175 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.175 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:12.175 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.175 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:12.175 "name": "Existed_Raid", 00:28:12.175 "uuid": "c2b1142a-87f0-45bb-98bc-d7ba4f1cabf6", 00:28:12.175 "strip_size_kb": 0, 00:28:12.175 "state": "online", 00:28:12.175 "raid_level": "raid1", 00:28:12.175 "superblock": true, 00:28:12.175 "num_base_bdevs": 2, 00:28:12.175 "num_base_bdevs_discovered": 1, 00:28:12.175 "num_base_bdevs_operational": 1, 00:28:12.175 "base_bdevs_list": [ 00:28:12.175 { 00:28:12.175 "name": null, 00:28:12.175 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:28:12.175 "is_configured": false, 00:28:12.175 "data_offset": 0, 00:28:12.175 "data_size": 7936 00:28:12.175 }, 00:28:12.175 { 00:28:12.175 "name": "BaseBdev2", 00:28:12.175 "uuid": "1b3a3c6b-614f-4ad9-8df5-b04e9f55eedb", 00:28:12.175 "is_configured": true, 00:28:12.175 "data_offset": 256, 00:28:12.175 "data_size": 7936 00:28:12.175 } 00:28:12.175 ] 00:28:12.175 }' 00:28:12.175 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:12.175 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:12.444 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:28:12.444 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:12.444 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:12.444 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.444 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:12.444 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:28:12.444 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.444 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:28:12.444 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:12.444 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:28:12.444 23:09:47 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.444 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:12.444 [2024-12-09 23:09:47.711941] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:12.444 [2024-12-09 23:09:47.712041] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:12.444 [2024-12-09 23:09:47.771427] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:12.444 [2024-12-09 23:09:47.771476] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:12.444 [2024-12-09 23:09:47.771487] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:28:12.444 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.444 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:28:12.445 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:12.445 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:12.445 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:28:12.445 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.445 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:12.445 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.729 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:28:12.729 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:28:12.729 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:28:12.729 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 85914 00:28:12.729 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 85914 ']' 00:28:12.729 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 85914 00:28:12.729 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:28:12.729 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:12.729 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85914 00:28:12.729 killing process with pid 85914 00:28:12.729 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:12.729 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:12.729 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85914' 00:28:12.729 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 85914 00:28:12.729 [2024-12-09 23:09:47.832312] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:12.729 23:09:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 85914 00:28:12.729 [2024-12-09 23:09:47.842748] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:13.302 
23:09:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:28:13.302 00:28:13.302 real 0m3.817s 00:28:13.302 user 0m5.498s 00:28:13.302 sys 0m0.622s 00:28:13.302 ************************************ 00:28:13.302 END TEST raid_state_function_test_sb_md_interleaved 00:28:13.302 ************************************ 00:28:13.302 23:09:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:13.302 23:09:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:13.302 23:09:48 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:28:13.302 23:09:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:13.302 23:09:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:13.302 23:09:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:13.302 ************************************ 00:28:13.302 START TEST raid_superblock_test_md_interleaved 00:28:13.302 ************************************ 00:28:13.302 23:09:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:28:13.302 23:09:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:28:13.302 23:09:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:28:13.302 23:09:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:28:13.302 23:09:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:28:13.302 23:09:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:28:13.302 23:09:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:28:13.302 23:09:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:28:13.302 23:09:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:28:13.302 23:09:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:28:13.302 23:09:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:28:13.302 23:09:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:28:13.302 23:09:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:28:13.302 23:09:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:28:13.302 23:09:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:28:13.302 23:09:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:28:13.302 23:09:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=86151 00:28:13.302 23:09:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 86151 00:28:13.302 23:09:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:28:13.302 23:09:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 86151 ']' 00:28:13.302 23:09:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:13.302 23:09:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:13.302 23:09:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:13.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:13.302 23:09:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:13.302 23:09:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:13.563 [2024-12-09 23:09:48.687564] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:28:13.563 [2024-12-09 23:09:48.687852] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86151 ] 00:28:13.563 [2024-12-09 23:09:48.847753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.824 [2024-12-09 23:09:48.951444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.824 [2024-12-09 23:09:49.090459] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:13.824 [2024-12-09 23:09:49.090645] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:14.395 malloc1 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:14.395 [2024-12-09 23:09:49.568961] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:14.395 [2024-12-09 23:09:49.569023] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:14.395 [2024-12-09 23:09:49.569051] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:14.395 [2024-12-09 23:09:49.569064] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:14.395 
[2024-12-09 23:09:49.571121] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:14.395 [2024-12-09 23:09:49.571158] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:14.395 pt1 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:14.395 malloc2 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:14.395 [2024-12-09 23:09:49.605204] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:14.395 [2024-12-09 23:09:49.605262] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:14.395 [2024-12-09 23:09:49.605288] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:28:14.395 [2024-12-09 23:09:49.605301] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:14.395 [2024-12-09 23:09:49.607264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:14.395 [2024-12-09 23:09:49.607301] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:14.395 pt2 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:14.395 [2024-12-09 23:09:49.613241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:14.395 [2024-12-09 23:09:49.615175] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:14.395 [2024-12-09 23:09:49.615362] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:28:14.395 [2024-12-09 23:09:49.615374] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:14.395 [2024-12-09 23:09:49.615451] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:28:14.395 [2024-12-09 23:09:49.615520] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:28:14.395 [2024-12-09 23:09:49.615530] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:28:14.395 [2024-12-09 23:09:49.615601] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:14.395 
23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:14.395 "name": "raid_bdev1", 00:28:14.395 "uuid": "c55670b8-c39d-4cd6-838f-3ab9ed61f6bc", 00:28:14.395 "strip_size_kb": 0, 00:28:14.395 "state": "online", 00:28:14.395 "raid_level": "raid1", 00:28:14.395 "superblock": true, 00:28:14.395 "num_base_bdevs": 2, 00:28:14.395 "num_base_bdevs_discovered": 2, 00:28:14.395 "num_base_bdevs_operational": 2, 00:28:14.395 "base_bdevs_list": [ 00:28:14.395 { 00:28:14.395 "name": "pt1", 00:28:14.395 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:14.395 "is_configured": true, 00:28:14.395 "data_offset": 256, 00:28:14.395 "data_size": 7936 00:28:14.395 }, 00:28:14.395 { 00:28:14.395 "name": "pt2", 00:28:14.395 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:14.395 "is_configured": true, 00:28:14.395 "data_offset": 256, 00:28:14.395 "data_size": 7936 00:28:14.395 } 00:28:14.395 ] 00:28:14.395 }' 00:28:14.395 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:14.395 23:09:49 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:14.657 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:28:14.657 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:28:14.657 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:14.657 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:14.657 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:28:14.657 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:14.657 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:14.657 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.657 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:14.657 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:14.657 [2024-12-09 23:09:49.945606] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:14.657 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.657 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:14.657 "name": "raid_bdev1", 00:28:14.657 "aliases": [ 00:28:14.657 "c55670b8-c39d-4cd6-838f-3ab9ed61f6bc" 00:28:14.657 ], 00:28:14.657 "product_name": "Raid Volume", 00:28:14.657 "block_size": 4128, 00:28:14.657 "num_blocks": 7936, 00:28:14.657 "uuid": "c55670b8-c39d-4cd6-838f-3ab9ed61f6bc", 00:28:14.657 "md_size": 32, 
00:28:14.658 "md_interleave": true, 00:28:14.658 "dif_type": 0, 00:28:14.658 "assigned_rate_limits": { 00:28:14.658 "rw_ios_per_sec": 0, 00:28:14.658 "rw_mbytes_per_sec": 0, 00:28:14.658 "r_mbytes_per_sec": 0, 00:28:14.658 "w_mbytes_per_sec": 0 00:28:14.658 }, 00:28:14.658 "claimed": false, 00:28:14.658 "zoned": false, 00:28:14.658 "supported_io_types": { 00:28:14.658 "read": true, 00:28:14.658 "write": true, 00:28:14.658 "unmap": false, 00:28:14.658 "flush": false, 00:28:14.658 "reset": true, 00:28:14.658 "nvme_admin": false, 00:28:14.658 "nvme_io": false, 00:28:14.658 "nvme_io_md": false, 00:28:14.658 "write_zeroes": true, 00:28:14.658 "zcopy": false, 00:28:14.658 "get_zone_info": false, 00:28:14.658 "zone_management": false, 00:28:14.658 "zone_append": false, 00:28:14.658 "compare": false, 00:28:14.658 "compare_and_write": false, 00:28:14.658 "abort": false, 00:28:14.658 "seek_hole": false, 00:28:14.658 "seek_data": false, 00:28:14.658 "copy": false, 00:28:14.658 "nvme_iov_md": false 00:28:14.658 }, 00:28:14.658 "memory_domains": [ 00:28:14.658 { 00:28:14.658 "dma_device_id": "system", 00:28:14.658 "dma_device_type": 1 00:28:14.658 }, 00:28:14.658 { 00:28:14.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:14.658 "dma_device_type": 2 00:28:14.658 }, 00:28:14.658 { 00:28:14.658 "dma_device_id": "system", 00:28:14.658 "dma_device_type": 1 00:28:14.658 }, 00:28:14.658 { 00:28:14.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:14.658 "dma_device_type": 2 00:28:14.658 } 00:28:14.658 ], 00:28:14.658 "driver_specific": { 00:28:14.658 "raid": { 00:28:14.658 "uuid": "c55670b8-c39d-4cd6-838f-3ab9ed61f6bc", 00:28:14.658 "strip_size_kb": 0, 00:28:14.658 "state": "online", 00:28:14.658 "raid_level": "raid1", 00:28:14.658 "superblock": true, 00:28:14.658 "num_base_bdevs": 2, 00:28:14.658 "num_base_bdevs_discovered": 2, 00:28:14.658 "num_base_bdevs_operational": 2, 00:28:14.658 "base_bdevs_list": [ 00:28:14.658 { 00:28:14.658 "name": "pt1", 00:28:14.658 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:28:14.658 "is_configured": true, 00:28:14.658 "data_offset": 256, 00:28:14.658 "data_size": 7936 00:28:14.658 }, 00:28:14.658 { 00:28:14.658 "name": "pt2", 00:28:14.658 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:14.658 "is_configured": true, 00:28:14.658 "data_offset": 256, 00:28:14.658 "data_size": 7936 00:28:14.658 } 00:28:14.658 ] 00:28:14.658 } 00:28:14.658 } 00:28:14.658 }' 00:28:14.658 23:09:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:14.658 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:28:14.658 pt2' 00:28:14.658 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:28:14.920 23:09:50 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:14.920 [2024-12-09 23:09:50.121636] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c55670b8-c39d-4cd6-838f-3ab9ed61f6bc 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z c55670b8-c39d-4cd6-838f-3ab9ed61f6bc ']' 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:14.920 [2024-12-09 23:09:50.153321] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:14.920 [2024-12-09 23:09:50.153428] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:14.920 [2024-12-09 23:09:50.153519] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:14.920 [2024-12-09 23:09:50.153582] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:14.920 [2024-12-09 23:09:50.153593] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.920 23:09:50 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:28:14.920 23:09:50 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:28:14.920 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:14.921 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:14.921 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:14.921 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:14.921 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:28:14.921 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.921 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:14.921 [2024-12-09 23:09:50.245360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:28:14.921 [2024-12-09 23:09:50.247310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:28:14.921 [2024-12-09 23:09:50.247378] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:28:14.921 [2024-12-09 23:09:50.247428] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:28:14.921 [2024-12-09 23:09:50.247443] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:14.921 [2024-12-09 23:09:50.247453] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:28:14.921 request: 00:28:14.921 { 00:28:14.921 "name": "raid_bdev1", 00:28:14.921 "raid_level": "raid1", 00:28:14.921 "base_bdevs": [ 00:28:14.921 "malloc1", 00:28:14.921 "malloc2" 00:28:14.921 ], 00:28:14.921 "superblock": false, 00:28:14.921 "method": "bdev_raid_create", 00:28:14.921 "req_id": 1 00:28:14.921 } 00:28:14.921 Got JSON-RPC error response 00:28:14.921 response: 00:28:14.921 { 00:28:14.921 "code": -17, 00:28:14.921 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:28:14.921 } 00:28:14.921 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:14.921 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:28:14.921 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:14.921 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:14.921 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:14.921 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:14.921 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:28:14.921 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.921 23:09:50 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:14.921 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.198 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:28:15.198 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:28:15.198 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:15.198 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.198 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:15.198 [2024-12-09 23:09:50.285362] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:15.198 [2024-12-09 23:09:50.285420] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:15.198 [2024-12-09 23:09:50.285435] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:28:15.198 [2024-12-09 23:09:50.285445] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:15.198 [2024-12-09 23:09:50.287397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:15.198 [2024-12-09 23:09:50.287432] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:15.198 [2024-12-09 23:09:50.287485] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:28:15.198 [2024-12-09 23:09:50.287542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:15.198 pt1 00:28:15.198 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.198 23:09:50 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:28:15.198 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:15.198 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:15.198 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:15.198 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:15.198 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:15.198 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:15.198 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:15.198 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:15.198 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:15.198 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:15.198 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.198 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:15.198 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:15.199 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.199 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:15.199 
"name": "raid_bdev1", 00:28:15.199 "uuid": "c55670b8-c39d-4cd6-838f-3ab9ed61f6bc", 00:28:15.199 "strip_size_kb": 0, 00:28:15.199 "state": "configuring", 00:28:15.199 "raid_level": "raid1", 00:28:15.199 "superblock": true, 00:28:15.199 "num_base_bdevs": 2, 00:28:15.199 "num_base_bdevs_discovered": 1, 00:28:15.199 "num_base_bdevs_operational": 2, 00:28:15.199 "base_bdevs_list": [ 00:28:15.199 { 00:28:15.199 "name": "pt1", 00:28:15.199 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:15.199 "is_configured": true, 00:28:15.199 "data_offset": 256, 00:28:15.199 "data_size": 7936 00:28:15.199 }, 00:28:15.199 { 00:28:15.199 "name": null, 00:28:15.199 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:15.199 "is_configured": false, 00:28:15.199 "data_offset": 256, 00:28:15.199 "data_size": 7936 00:28:15.199 } 00:28:15.199 ] 00:28:15.199 }' 00:28:15.199 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:15.199 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:15.467 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:28:15.467 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:28:15.467 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:28:15.467 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:15.467 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.467 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:15.467 [2024-12-09 23:09:50.589434] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:15.467 [2024-12-09 23:09:50.589501] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:15.467 [2024-12-09 23:09:50.589522] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:28:15.467 [2024-12-09 23:09:50.589534] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:15.467 [2024-12-09 23:09:50.589698] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:15.467 [2024-12-09 23:09:50.589715] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:15.467 [2024-12-09 23:09:50.589763] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:15.467 [2024-12-09 23:09:50.589785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:15.468 [2024-12-09 23:09:50.589867] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:28:15.468 [2024-12-09 23:09:50.589878] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:15.468 [2024-12-09 23:09:50.589940] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:28:15.468 [2024-12-09 23:09:50.589999] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:28:15.468 [2024-12-09 23:09:50.590006] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:28:15.468 [2024-12-09 23:09:50.590064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:15.468 pt2 00:28:15.468 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.468 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:28:15.468 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:28:15.468 23:09:50 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:15.468 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:15.468 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:15.468 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:15.468 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:15.468 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:15.468 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:15.468 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:15.468 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:15.468 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:15.468 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:15.468 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:15.468 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.468 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:15.468 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.468 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:15.468 "name": 
"raid_bdev1", 00:28:15.468 "uuid": "c55670b8-c39d-4cd6-838f-3ab9ed61f6bc", 00:28:15.468 "strip_size_kb": 0, 00:28:15.468 "state": "online", 00:28:15.468 "raid_level": "raid1", 00:28:15.468 "superblock": true, 00:28:15.468 "num_base_bdevs": 2, 00:28:15.468 "num_base_bdevs_discovered": 2, 00:28:15.468 "num_base_bdevs_operational": 2, 00:28:15.468 "base_bdevs_list": [ 00:28:15.468 { 00:28:15.468 "name": "pt1", 00:28:15.468 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:15.468 "is_configured": true, 00:28:15.468 "data_offset": 256, 00:28:15.468 "data_size": 7936 00:28:15.468 }, 00:28:15.468 { 00:28:15.468 "name": "pt2", 00:28:15.468 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:15.468 "is_configured": true, 00:28:15.468 "data_offset": 256, 00:28:15.468 "data_size": 7936 00:28:15.468 } 00:28:15.468 ] 00:28:15.468 }' 00:28:15.468 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:15.468 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:15.729 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:28:15.729 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:28:15.729 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:15.729 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:15.729 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:28:15.729 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:15.729 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:15.730 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:15.730 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.730 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:15.730 [2024-12-09 23:09:50.917794] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:15.730 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.730 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:15.730 "name": "raid_bdev1", 00:28:15.730 "aliases": [ 00:28:15.730 "c55670b8-c39d-4cd6-838f-3ab9ed61f6bc" 00:28:15.730 ], 00:28:15.730 "product_name": "Raid Volume", 00:28:15.730 "block_size": 4128, 00:28:15.730 "num_blocks": 7936, 00:28:15.730 "uuid": "c55670b8-c39d-4cd6-838f-3ab9ed61f6bc", 00:28:15.730 "md_size": 32, 00:28:15.730 "md_interleave": true, 00:28:15.730 "dif_type": 0, 00:28:15.730 "assigned_rate_limits": { 00:28:15.730 "rw_ios_per_sec": 0, 00:28:15.730 "rw_mbytes_per_sec": 0, 00:28:15.730 "r_mbytes_per_sec": 0, 00:28:15.730 "w_mbytes_per_sec": 0 00:28:15.730 }, 00:28:15.730 "claimed": false, 00:28:15.730 "zoned": false, 00:28:15.730 "supported_io_types": { 00:28:15.730 "read": true, 00:28:15.730 "write": true, 00:28:15.730 "unmap": false, 00:28:15.730 "flush": false, 00:28:15.730 "reset": true, 00:28:15.730 "nvme_admin": false, 00:28:15.730 "nvme_io": false, 00:28:15.730 "nvme_io_md": false, 00:28:15.730 "write_zeroes": true, 00:28:15.730 "zcopy": false, 00:28:15.730 "get_zone_info": false, 00:28:15.730 "zone_management": false, 00:28:15.730 "zone_append": false, 00:28:15.730 "compare": false, 00:28:15.730 "compare_and_write": false, 00:28:15.730 "abort": false, 00:28:15.730 "seek_hole": false, 00:28:15.730 "seek_data": false, 00:28:15.730 "copy": false, 00:28:15.730 "nvme_iov_md": false 00:28:15.730 }, 
00:28:15.730 "memory_domains": [ 00:28:15.730 { 00:28:15.730 "dma_device_id": "system", 00:28:15.730 "dma_device_type": 1 00:28:15.730 }, 00:28:15.730 { 00:28:15.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:15.730 "dma_device_type": 2 00:28:15.730 }, 00:28:15.730 { 00:28:15.730 "dma_device_id": "system", 00:28:15.730 "dma_device_type": 1 00:28:15.730 }, 00:28:15.730 { 00:28:15.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:15.730 "dma_device_type": 2 00:28:15.730 } 00:28:15.730 ], 00:28:15.730 "driver_specific": { 00:28:15.730 "raid": { 00:28:15.730 "uuid": "c55670b8-c39d-4cd6-838f-3ab9ed61f6bc", 00:28:15.730 "strip_size_kb": 0, 00:28:15.730 "state": "online", 00:28:15.730 "raid_level": "raid1", 00:28:15.730 "superblock": true, 00:28:15.730 "num_base_bdevs": 2, 00:28:15.730 "num_base_bdevs_discovered": 2, 00:28:15.730 "num_base_bdevs_operational": 2, 00:28:15.730 "base_bdevs_list": [ 00:28:15.730 { 00:28:15.730 "name": "pt1", 00:28:15.730 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:15.730 "is_configured": true, 00:28:15.730 "data_offset": 256, 00:28:15.730 "data_size": 7936 00:28:15.730 }, 00:28:15.730 { 00:28:15.730 "name": "pt2", 00:28:15.730 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:15.730 "is_configured": true, 00:28:15.730 "data_offset": 256, 00:28:15.730 "data_size": 7936 00:28:15.730 } 00:28:15.730 ] 00:28:15.730 } 00:28:15.730 } 00:28:15.730 }' 00:28:15.730 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:15.730 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:28:15.730 pt2' 00:28:15.730 23:09:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:15.730 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # 
cmp_raid_bdev='4128 32 true 0' 00:28:15.730 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:15.730 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:28:15.730 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:15.730 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.730 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:15.730 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.730 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:28:15.730 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:28:15.730 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:15.730 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:28:15.730 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.730 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:15.730 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:15.730 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.730 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 
true 0' 00:28:15.730 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:28:15.730 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:15.730 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.730 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:15.730 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:28:15.730 [2024-12-09 23:09:51.077834] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:15.730 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.992 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' c55670b8-c39d-4cd6-838f-3ab9ed61f6bc '!=' c55670b8-c39d-4cd6-838f-3ab9ed61f6bc ']' 00:28:15.992 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:28:15.992 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:15.992 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:28:15.992 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:28:15.992 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.993 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:15.993 [2024-12-09 23:09:51.113588] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:28:15.993 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:28:15.993 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:15.993 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:15.993 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:15.993 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:15.993 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:15.993 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:15.993 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:15.993 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:15.993 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:15.993 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:15.993 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:15.993 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.993 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:15.993 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:15.993 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.993 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:28:15.993 "name": "raid_bdev1", 00:28:15.993 "uuid": "c55670b8-c39d-4cd6-838f-3ab9ed61f6bc", 00:28:15.993 "strip_size_kb": 0, 00:28:15.993 "state": "online", 00:28:15.993 "raid_level": "raid1", 00:28:15.993 "superblock": true, 00:28:15.993 "num_base_bdevs": 2, 00:28:15.993 "num_base_bdevs_discovered": 1, 00:28:15.993 "num_base_bdevs_operational": 1, 00:28:15.993 "base_bdevs_list": [ 00:28:15.993 { 00:28:15.993 "name": null, 00:28:15.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:15.993 "is_configured": false, 00:28:15.993 "data_offset": 0, 00:28:15.993 "data_size": 7936 00:28:15.993 }, 00:28:15.993 { 00:28:15.993 "name": "pt2", 00:28:15.993 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:15.993 "is_configured": true, 00:28:15.993 "data_offset": 256, 00:28:15.993 "data_size": 7936 00:28:15.993 } 00:28:15.993 ] 00:28:15.993 }' 00:28:15.993 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:15.993 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:16.254 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:16.254 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.254 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:16.254 [2024-12-09 23:09:51.437630] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:16.255 [2024-12-09 23:09:51.437655] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:16.255 [2024-12-09 23:09:51.437718] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:16.255 [2024-12-09 23:09:51.437763] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:16.255 [2024-12-09 
23:09:51.437774] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:16.255 [2024-12-09 23:09:51.485645] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:16.255 [2024-12-09 23:09:51.485695] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:16.255 [2024-12-09 23:09:51.485710] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:28:16.255 [2024-12-09 23:09:51.485721] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:16.255 [2024-12-09 23:09:51.487650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:16.255 [2024-12-09 23:09:51.487686] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:16.255 [2024-12-09 23:09:51.487735] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:16.255 [2024-12-09 23:09:51.487780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:16.255 [2024-12-09 23:09:51.487840] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:28:16.255 [2024-12-09 23:09:51.487851] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 
00:28:16.255 [2024-12-09 23:09:51.487931] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:28:16.255 [2024-12-09 23:09:51.487987] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:28:16.255 [2024-12-09 23:09:51.487994] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:28:16.255 [2024-12-09 23:09:51.488053] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:16.255 pt2 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:16.255 "name": "raid_bdev1", 00:28:16.255 "uuid": "c55670b8-c39d-4cd6-838f-3ab9ed61f6bc", 00:28:16.255 "strip_size_kb": 0, 00:28:16.255 "state": "online", 00:28:16.255 "raid_level": "raid1", 00:28:16.255 "superblock": true, 00:28:16.255 "num_base_bdevs": 2, 00:28:16.255 "num_base_bdevs_discovered": 1, 00:28:16.255 "num_base_bdevs_operational": 1, 00:28:16.255 "base_bdevs_list": [ 00:28:16.255 { 00:28:16.255 "name": null, 00:28:16.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:16.255 "is_configured": false, 00:28:16.255 "data_offset": 256, 00:28:16.255 "data_size": 7936 00:28:16.255 }, 00:28:16.255 { 00:28:16.255 "name": "pt2", 00:28:16.255 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:16.255 "is_configured": true, 00:28:16.255 "data_offset": 256, 00:28:16.255 "data_size": 7936 00:28:16.255 } 00:28:16.255 ] 00:28:16.255 }' 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:16.255 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:16.515 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:16.515 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:16.515 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:16.515 [2024-12-09 23:09:51.805705] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:16.515 [2024-12-09 23:09:51.805732] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:16.515 [2024-12-09 23:09:51.805795] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:16.515 [2024-12-09 23:09:51.805843] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:16.515 [2024-12-09 23:09:51.805852] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:28:16.515 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.515 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:16.515 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.515 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:28:16.515 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:16.515 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.515 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:28:16.515 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:28:16.515 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:28:16.515 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:28:16.515 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.515 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:16.515 [2024-12-09 23:09:51.853746] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:16.515 [2024-12-09 23:09:51.853799] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:16.515 [2024-12-09 23:09:51.853815] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:28:16.515 [2024-12-09 23:09:51.853824] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:16.515 [2024-12-09 23:09:51.855779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:16.515 [2024-12-09 23:09:51.855910] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:16.515 [2024-12-09 23:09:51.855972] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:28:16.515 [2024-12-09 23:09:51.856017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:16.515 [2024-12-09 23:09:51.856125] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:28:16.515 [2024-12-09 23:09:51.856135] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:16.515 [2024-12-09 23:09:51.856154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:28:16.515 [2024-12-09 23:09:51.856203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:16.515 [2024-12-09 23:09:51.856273] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:28:16.515 [2024-12-09 23:09:51.856282] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:16.516 [2024-12-09 23:09:51.856349] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:28:16.516 [2024-12-09 23:09:51.856404] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:28:16.516 [2024-12-09 23:09:51.856414] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:28:16.516 [2024-12-09 23:09:51.856493] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:16.516 pt1 00:28:16.516 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.516 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:28:16.516 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:16.516 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:16.516 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:16.516 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:16.516 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:16.516 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:16.516 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:16.516 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:16.516 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:28:16.516 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:16.516 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:16.516 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:16.516 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.516 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:16.516 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.777 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:16.777 "name": "raid_bdev1", 00:28:16.777 "uuid": "c55670b8-c39d-4cd6-838f-3ab9ed61f6bc", 00:28:16.777 "strip_size_kb": 0, 00:28:16.777 "state": "online", 00:28:16.777 "raid_level": "raid1", 00:28:16.777 "superblock": true, 00:28:16.777 "num_base_bdevs": 2, 00:28:16.777 "num_base_bdevs_discovered": 1, 00:28:16.777 "num_base_bdevs_operational": 1, 00:28:16.777 "base_bdevs_list": [ 00:28:16.777 { 00:28:16.777 "name": null, 00:28:16.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:16.777 "is_configured": false, 00:28:16.778 "data_offset": 256, 00:28:16.778 "data_size": 7936 00:28:16.778 }, 00:28:16.778 { 00:28:16.778 "name": "pt2", 00:28:16.778 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:16.778 "is_configured": true, 00:28:16.778 "data_offset": 256, 00:28:16.778 "data_size": 7936 00:28:16.778 } 00:28:16.778 ] 00:28:16.778 }' 00:28:16.778 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:16.778 23:09:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:17.079 23:09:52 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:28:17.079 23:09:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:28:17.079 23:09:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.079 23:09:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:17.079 23:09:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.079 23:09:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:28:17.079 23:09:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:17.079 23:09:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:28:17.079 23:09:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.079 23:09:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:17.079 [2024-12-09 23:09:52.214054] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:17.079 23:09:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.079 23:09:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' c55670b8-c39d-4cd6-838f-3ab9ed61f6bc '!=' c55670b8-c39d-4cd6-838f-3ab9ed61f6bc ']' 00:28:17.079 23:09:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 86151 00:28:17.079 23:09:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 86151 ']' 00:28:17.079 23:09:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 86151 00:28:17.079 23:09:52 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:28:17.080 23:09:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:17.080 23:09:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86151 00:28:17.080 killing process with pid 86151 00:28:17.080 23:09:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:17.080 23:09:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:17.080 23:09:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86151' 00:28:17.080 23:09:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 86151 00:28:17.080 [2024-12-09 23:09:52.263192] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:17.080 23:09:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 86151 00:28:17.080 [2024-12-09 23:09:52.263274] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:17.080 [2024-12-09 23:09:52.263322] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:17.080 [2024-12-09 23:09:52.263336] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:28:17.080 [2024-12-09 23:09:52.393448] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:18.023 23:09:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:28:18.023 00:28:18.023 real 0m4.472s 00:28:18.023 user 0m6.749s 00:28:18.023 sys 0m0.753s 00:28:18.023 23:09:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:28:18.023 23:09:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:18.023 ************************************ 00:28:18.023 END TEST raid_superblock_test_md_interleaved 00:28:18.023 ************************************ 00:28:18.023 23:09:53 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:28:18.023 23:09:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:28:18.023 23:09:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:18.023 23:09:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:18.023 ************************************ 00:28:18.023 START TEST raid_rebuild_test_sb_md_interleaved 00:28:18.023 ************************************ 00:28:18.023 23:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:28:18.023 23:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:28:18.023 23:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:28:18.023 23:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:28:18.023 23:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:28:18.023 23:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:28:18.023 23:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:28:18.023 23:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:18.023 23:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:28:18.023 23:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:18.023 23:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:18.023 23:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:28:18.023 23:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:18.023 23:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:18.023 23:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:18.023 23:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:28:18.023 23:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:28:18.023 23:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:28:18.023 23:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:28:18.023 23:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:28:18.023 23:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:28:18.023 23:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:28:18.023 23:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:28:18.023 23:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:28:18.023 23:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:28:18.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:18.023 23:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=86457 00:28:18.023 23:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 86457 00:28:18.023 23:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 86457 ']' 00:28:18.023 23:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:18.023 23:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:18.023 23:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:18.023 23:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:18.023 23:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:18.023 23:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:18.023 [2024-12-09 23:09:53.227938] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:28:18.023 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:18.023 Zero copy mechanism will not be used. 
00:28:18.023 [2024-12-09 23:09:53.228256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86457 ] 00:28:18.285 [2024-12-09 23:09:53.383723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.285 [2024-12-09 23:09:53.468963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.285 [2024-12-09 23:09:53.579690] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:18.285 [2024-12-09 23:09:53.579826] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:18.857 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:18.857 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:28:18.857 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:18.857 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:28:18.857 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.857 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:18.857 BaseBdev1_malloc 00:28:18.857 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.857 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:18.857 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.857 23:09:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:18.857 [2024-12-09 23:09:54.155259] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:18.857 [2024-12-09 23:09:54.155316] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:18.857 [2024-12-09 23:09:54.155336] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:18.857 [2024-12-09 23:09:54.155346] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:18.858 [2024-12-09 23:09:54.156930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:18.858 [2024-12-09 23:09:54.157084] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:18.858 BaseBdev1 00:28:18.858 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.858 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:18.858 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:28:18.858 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.858 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:18.858 BaseBdev2_malloc 00:28:18.858 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.858 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:18.858 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.858 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:28:18.858 [2024-12-09 23:09:54.190542] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:18.858 [2024-12-09 23:09:54.190588] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:18.858 [2024-12-09 23:09:54.190603] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:28:18.858 [2024-12-09 23:09:54.190612] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:18.858 [2024-12-09 23:09:54.192204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:18.858 [2024-12-09 23:09:54.192234] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:18.858 BaseBdev2 00:28:18.858 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.858 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:28:18.858 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.858 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:19.119 spare_malloc 00:28:19.119 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.119 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:19.119 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.119 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:19.119 spare_delay 00:28:19.119 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.119 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:28:19.119 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.119 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:19.119 [2024-12-09 23:09:54.251379] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:19.119 [2024-12-09 23:09:54.251426] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:19.119 [2024-12-09 23:09:54.251442] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:28:19.119 [2024-12-09 23:09:54.251451] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:19.119 [2024-12-09 23:09:54.253017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:19.119 [2024-12-09 23:09:54.253053] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:19.119 spare 00:28:19.119 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.119 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:28:19.119 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.119 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:19.119 [2024-12-09 23:09:54.259417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:19.119 [2024-12-09 23:09:54.260964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:19.119 [2024-12-09 
23:09:54.261128] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:28:19.119 [2024-12-09 23:09:54.261141] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:19.120 [2024-12-09 23:09:54.261201] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:28:19.120 [2024-12-09 23:09:54.261257] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:28:19.120 [2024-12-09 23:09:54.261264] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:28:19.120 [2024-12-09 23:09:54.261317] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:19.120 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.120 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:19.120 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:19.120 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:19.120 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:19.120 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:19.120 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:19.120 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:19.120 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:19.120 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:28:19.120 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:19.120 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:19.120 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:19.120 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.120 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:19.120 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.120 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:19.120 "name": "raid_bdev1", 00:28:19.120 "uuid": "07e3ab5e-e603-4046-ba75-3765884930f3", 00:28:19.120 "strip_size_kb": 0, 00:28:19.120 "state": "online", 00:28:19.120 "raid_level": "raid1", 00:28:19.120 "superblock": true, 00:28:19.120 "num_base_bdevs": 2, 00:28:19.120 "num_base_bdevs_discovered": 2, 00:28:19.120 "num_base_bdevs_operational": 2, 00:28:19.120 "base_bdevs_list": [ 00:28:19.120 { 00:28:19.120 "name": "BaseBdev1", 00:28:19.120 "uuid": "f8f508d9-8bf1-5e57-be7a-736b23d7008e", 00:28:19.120 "is_configured": true, 00:28:19.120 "data_offset": 256, 00:28:19.120 "data_size": 7936 00:28:19.120 }, 00:28:19.120 { 00:28:19.120 "name": "BaseBdev2", 00:28:19.120 "uuid": "03c26711-79b3-5a0f-8cc8-c3b27b812ead", 00:28:19.120 "is_configured": true, 00:28:19.120 "data_offset": 256, 00:28:19.120 "data_size": 7936 00:28:19.120 } 00:28:19.120 ] 00:28:19.120 }' 00:28:19.120 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:19.120 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:19.382 23:09:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:19.382 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:28:19.382 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.382 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:19.382 [2024-12-09 23:09:54.623737] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:19.382 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.382 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:28:19.382 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:19.382 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:19.382 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.382 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:19.382 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.382 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:28:19.382 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:28:19.382 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:28:19.382 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:28:19.382 23:09:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.382 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:19.382 [2024-12-09 23:09:54.699494] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:19.382 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.382 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:19.382 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:19.382 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:19.382 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:19.382 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:19.382 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:19.382 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:19.382 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:19.382 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:19.382 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:19.382 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:19.382 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.382 23:09:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:19.382 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:19.382 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.382 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:19.382 "name": "raid_bdev1", 00:28:19.382 "uuid": "07e3ab5e-e603-4046-ba75-3765884930f3", 00:28:19.382 "strip_size_kb": 0, 00:28:19.382 "state": "online", 00:28:19.382 "raid_level": "raid1", 00:28:19.382 "superblock": true, 00:28:19.382 "num_base_bdevs": 2, 00:28:19.382 "num_base_bdevs_discovered": 1, 00:28:19.382 "num_base_bdevs_operational": 1, 00:28:19.382 "base_bdevs_list": [ 00:28:19.382 { 00:28:19.382 "name": null, 00:28:19.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:19.382 "is_configured": false, 00:28:19.382 "data_offset": 0, 00:28:19.382 "data_size": 7936 00:28:19.382 }, 00:28:19.382 { 00:28:19.382 "name": "BaseBdev2", 00:28:19.382 "uuid": "03c26711-79b3-5a0f-8cc8-c3b27b812ead", 00:28:19.382 "is_configured": true, 00:28:19.382 "data_offset": 256, 00:28:19.382 "data_size": 7936 00:28:19.382 } 00:28:19.382 ] 00:28:19.382 }' 00:28:19.382 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:19.382 23:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:19.953 23:09:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:19.953 23:09:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.953 23:09:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:19.953 [2024-12-09 23:09:55.019571] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:19.953 [2024-12-09 23:09:55.029004] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:28:19.953 23:09:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.953 23:09:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:28:19.953 [2024-12-09 23:09:55.030516] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:20.894 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:20.894 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:20.894 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:20.894 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:20.894 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:20.894 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:20.894 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.894 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:20.894 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:20.894 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.894 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:20.894 "name": "raid_bdev1", 00:28:20.894 
"uuid": "07e3ab5e-e603-4046-ba75-3765884930f3", 00:28:20.894 "strip_size_kb": 0, 00:28:20.894 "state": "online", 00:28:20.894 "raid_level": "raid1", 00:28:20.894 "superblock": true, 00:28:20.894 "num_base_bdevs": 2, 00:28:20.894 "num_base_bdevs_discovered": 2, 00:28:20.894 "num_base_bdevs_operational": 2, 00:28:20.894 "process": { 00:28:20.894 "type": "rebuild", 00:28:20.894 "target": "spare", 00:28:20.894 "progress": { 00:28:20.894 "blocks": 2560, 00:28:20.894 "percent": 32 00:28:20.894 } 00:28:20.894 }, 00:28:20.894 "base_bdevs_list": [ 00:28:20.894 { 00:28:20.894 "name": "spare", 00:28:20.894 "uuid": "e6270576-61e4-5635-a304-e393ac3795f0", 00:28:20.894 "is_configured": true, 00:28:20.894 "data_offset": 256, 00:28:20.894 "data_size": 7936 00:28:20.894 }, 00:28:20.894 { 00:28:20.894 "name": "BaseBdev2", 00:28:20.894 "uuid": "03c26711-79b3-5a0f-8cc8-c3b27b812ead", 00:28:20.894 "is_configured": true, 00:28:20.894 "data_offset": 256, 00:28:20.894 "data_size": 7936 00:28:20.894 } 00:28:20.894 ] 00:28:20.894 }' 00:28:20.894 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:20.894 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:20.894 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:20.894 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:20.894 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:28:20.894 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.894 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:20.894 [2024-12-09 23:09:56.136753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:28:20.894 [2024-12-09 23:09:56.235879] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:20.894 [2024-12-09 23:09:56.236083] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:20.894 [2024-12-09 23:09:56.236159] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:20.894 [2024-12-09 23:09:56.236186] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:21.157 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.157 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:21.157 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:21.157 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:21.157 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:21.157 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:21.157 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:21.157 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:21.157 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:21.157 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:21.157 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:21.157 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:21.157 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.157 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:21.157 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:21.157 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.157 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:21.157 "name": "raid_bdev1", 00:28:21.157 "uuid": "07e3ab5e-e603-4046-ba75-3765884930f3", 00:28:21.157 "strip_size_kb": 0, 00:28:21.157 "state": "online", 00:28:21.157 "raid_level": "raid1", 00:28:21.157 "superblock": true, 00:28:21.157 "num_base_bdevs": 2, 00:28:21.157 "num_base_bdevs_discovered": 1, 00:28:21.157 "num_base_bdevs_operational": 1, 00:28:21.157 "base_bdevs_list": [ 00:28:21.157 { 00:28:21.157 "name": null, 00:28:21.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:21.157 "is_configured": false, 00:28:21.157 "data_offset": 0, 00:28:21.157 "data_size": 7936 00:28:21.157 }, 00:28:21.157 { 00:28:21.157 "name": "BaseBdev2", 00:28:21.157 "uuid": "03c26711-79b3-5a0f-8cc8-c3b27b812ead", 00:28:21.157 "is_configured": true, 00:28:21.157 "data_offset": 256, 00:28:21.157 "data_size": 7936 00:28:21.157 } 00:28:21.157 ] 00:28:21.157 }' 00:28:21.157 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:21.157 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:21.416 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:21.416 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:28:21.416 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:21.416 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:21.416 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:21.416 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:21.416 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.416 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:21.416 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:21.416 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.416 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:21.416 "name": "raid_bdev1", 00:28:21.416 "uuid": "07e3ab5e-e603-4046-ba75-3765884930f3", 00:28:21.416 "strip_size_kb": 0, 00:28:21.416 "state": "online", 00:28:21.416 "raid_level": "raid1", 00:28:21.416 "superblock": true, 00:28:21.416 "num_base_bdevs": 2, 00:28:21.416 "num_base_bdevs_discovered": 1, 00:28:21.416 "num_base_bdevs_operational": 1, 00:28:21.416 "base_bdevs_list": [ 00:28:21.416 { 00:28:21.416 "name": null, 00:28:21.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:21.417 "is_configured": false, 00:28:21.417 "data_offset": 0, 00:28:21.417 "data_size": 7936 00:28:21.417 }, 00:28:21.417 { 00:28:21.417 "name": "BaseBdev2", 00:28:21.417 "uuid": "03c26711-79b3-5a0f-8cc8-c3b27b812ead", 00:28:21.417 "is_configured": true, 00:28:21.417 "data_offset": 256, 00:28:21.417 "data_size": 7936 00:28:21.417 } 00:28:21.417 ] 00:28:21.417 }' 
00:28:21.417 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:21.417 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:21.417 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:21.417 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:21.417 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:21.417 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.417 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:21.417 [2024-12-09 23:09:56.691126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:21.417 [2024-12-09 23:09:56.700046] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:28:21.417 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.417 23:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:28:21.417 [2024-12-09 23:09:56.701589] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:22.353 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:22.353 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:22.353 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:22.353 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:28:22.353 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:22.353 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:22.353 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:22.353 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.353 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:22.614 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.614 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:22.614 "name": "raid_bdev1", 00:28:22.614 "uuid": "07e3ab5e-e603-4046-ba75-3765884930f3", 00:28:22.614 "strip_size_kb": 0, 00:28:22.614 "state": "online", 00:28:22.614 "raid_level": "raid1", 00:28:22.614 "superblock": true, 00:28:22.614 "num_base_bdevs": 2, 00:28:22.614 "num_base_bdevs_discovered": 2, 00:28:22.614 "num_base_bdevs_operational": 2, 00:28:22.614 "process": { 00:28:22.614 "type": "rebuild", 00:28:22.614 "target": "spare", 00:28:22.614 "progress": { 00:28:22.614 "blocks": 2560, 00:28:22.614 "percent": 32 00:28:22.614 } 00:28:22.614 }, 00:28:22.614 "base_bdevs_list": [ 00:28:22.614 { 00:28:22.614 "name": "spare", 00:28:22.614 "uuid": "e6270576-61e4-5635-a304-e393ac3795f0", 00:28:22.614 "is_configured": true, 00:28:22.614 "data_offset": 256, 00:28:22.614 "data_size": 7936 00:28:22.614 }, 00:28:22.614 { 00:28:22.614 "name": "BaseBdev2", 00:28:22.614 "uuid": "03c26711-79b3-5a0f-8cc8-c3b27b812ead", 00:28:22.614 "is_configured": true, 00:28:22.614 "data_offset": 256, 00:28:22.614 "data_size": 7936 00:28:22.614 } 00:28:22.614 ] 00:28:22.614 }' 00:28:22.614 23:09:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:22.614 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:22.614 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:22.614 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:22.614 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:28:22.614 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:28:22.614 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:28:22.614 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:28:22.614 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:28:22.614 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:28:22.614 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=605 00:28:22.614 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:22.614 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:22.614 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:22.614 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:22.614 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:22.614 23:09:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:22.614 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:22.614 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:22.614 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.614 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:22.614 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.614 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:22.614 "name": "raid_bdev1", 00:28:22.614 "uuid": "07e3ab5e-e603-4046-ba75-3765884930f3", 00:28:22.614 "strip_size_kb": 0, 00:28:22.614 "state": "online", 00:28:22.614 "raid_level": "raid1", 00:28:22.614 "superblock": true, 00:28:22.614 "num_base_bdevs": 2, 00:28:22.614 "num_base_bdevs_discovered": 2, 00:28:22.614 "num_base_bdevs_operational": 2, 00:28:22.614 "process": { 00:28:22.614 "type": "rebuild", 00:28:22.614 "target": "spare", 00:28:22.614 "progress": { 00:28:22.614 "blocks": 2816, 00:28:22.614 "percent": 35 00:28:22.614 } 00:28:22.614 }, 00:28:22.614 "base_bdevs_list": [ 00:28:22.614 { 00:28:22.614 "name": "spare", 00:28:22.614 "uuid": "e6270576-61e4-5635-a304-e393ac3795f0", 00:28:22.614 "is_configured": true, 00:28:22.614 "data_offset": 256, 00:28:22.614 "data_size": 7936 00:28:22.614 }, 00:28:22.614 { 00:28:22.614 "name": "BaseBdev2", 00:28:22.614 "uuid": "03c26711-79b3-5a0f-8cc8-c3b27b812ead", 00:28:22.614 "is_configured": true, 00:28:22.614 "data_offset": 256, 00:28:22.614 "data_size": 7936 00:28:22.614 } 00:28:22.614 ] 00:28:22.614 }' 00:28:22.614 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:22.614 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:22.614 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:22.614 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:22.614 23:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:23.558 23:09:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:23.558 23:09:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:23.558 23:09:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:23.558 23:09:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:23.558 23:09:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:23.558 23:09:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:23.558 23:09:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:23.558 23:09:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:23.558 23:09:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.558 23:09:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:23.558 23:09:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.820 23:09:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:23.820 "name": "raid_bdev1", 00:28:23.820 "uuid": "07e3ab5e-e603-4046-ba75-3765884930f3", 00:28:23.820 "strip_size_kb": 0, 00:28:23.820 "state": "online", 00:28:23.820 "raid_level": "raid1", 00:28:23.820 "superblock": true, 00:28:23.820 "num_base_bdevs": 2, 00:28:23.820 "num_base_bdevs_discovered": 2, 00:28:23.820 "num_base_bdevs_operational": 2, 00:28:23.820 "process": { 00:28:23.820 "type": "rebuild", 00:28:23.820 "target": "spare", 00:28:23.820 "progress": { 00:28:23.820 "blocks": 5376, 00:28:23.820 "percent": 67 00:28:23.820 } 00:28:23.820 }, 00:28:23.820 "base_bdevs_list": [ 00:28:23.820 { 00:28:23.820 "name": "spare", 00:28:23.820 "uuid": "e6270576-61e4-5635-a304-e393ac3795f0", 00:28:23.820 "is_configured": true, 00:28:23.820 "data_offset": 256, 00:28:23.820 "data_size": 7936 00:28:23.820 }, 00:28:23.820 { 00:28:23.820 "name": "BaseBdev2", 00:28:23.820 "uuid": "03c26711-79b3-5a0f-8cc8-c3b27b812ead", 00:28:23.820 "is_configured": true, 00:28:23.820 "data_offset": 256, 00:28:23.820 "data_size": 7936 00:28:23.820 } 00:28:23.820 ] 00:28:23.820 }' 00:28:23.820 23:09:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:23.820 23:09:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:23.820 23:09:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:23.820 23:09:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:23.820 23:09:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:24.764 [2024-12-09 23:09:59.815455] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:24.764 [2024-12-09 23:09:59.815526] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:24.764 [2024-12-09 23:09:59.815624] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:24.764 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:24.764 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:24.764 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:24.764 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:24.764 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:24.764 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:24.764 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:24.764 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.764 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:24.764 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:24.764 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.764 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:24.764 "name": "raid_bdev1", 00:28:24.764 "uuid": "07e3ab5e-e603-4046-ba75-3765884930f3", 00:28:24.764 "strip_size_kb": 0, 00:28:24.764 "state": "online", 00:28:24.764 "raid_level": "raid1", 00:28:24.764 "superblock": true, 00:28:24.764 "num_base_bdevs": 2, 00:28:24.764 
"num_base_bdevs_discovered": 2, 00:28:24.764 "num_base_bdevs_operational": 2, 00:28:24.764 "base_bdevs_list": [ 00:28:24.764 { 00:28:24.764 "name": "spare", 00:28:24.764 "uuid": "e6270576-61e4-5635-a304-e393ac3795f0", 00:28:24.764 "is_configured": true, 00:28:24.764 "data_offset": 256, 00:28:24.764 "data_size": 7936 00:28:24.764 }, 00:28:24.764 { 00:28:24.764 "name": "BaseBdev2", 00:28:24.764 "uuid": "03c26711-79b3-5a0f-8cc8-c3b27b812ead", 00:28:24.764 "is_configured": true, 00:28:24.764 "data_offset": 256, 00:28:24.764 "data_size": 7936 00:28:24.764 } 00:28:24.764 ] 00:28:24.764 }' 00:28:24.764 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:24.764 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:24.764 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:24.764 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:28:24.764 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:28:24.764 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:24.764 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:24.764 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:24.764 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:24.764 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:24.764 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:24.764 
23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:24.764 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.764 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:24.764 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.024 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:25.024 "name": "raid_bdev1", 00:28:25.024 "uuid": "07e3ab5e-e603-4046-ba75-3765884930f3", 00:28:25.024 "strip_size_kb": 0, 00:28:25.024 "state": "online", 00:28:25.024 "raid_level": "raid1", 00:28:25.024 "superblock": true, 00:28:25.024 "num_base_bdevs": 2, 00:28:25.024 "num_base_bdevs_discovered": 2, 00:28:25.024 "num_base_bdevs_operational": 2, 00:28:25.024 "base_bdevs_list": [ 00:28:25.024 { 00:28:25.024 "name": "spare", 00:28:25.024 "uuid": "e6270576-61e4-5635-a304-e393ac3795f0", 00:28:25.024 "is_configured": true, 00:28:25.024 "data_offset": 256, 00:28:25.024 "data_size": 7936 00:28:25.024 }, 00:28:25.024 { 00:28:25.024 "name": "BaseBdev2", 00:28:25.024 "uuid": "03c26711-79b3-5a0f-8cc8-c3b27b812ead", 00:28:25.024 "is_configured": true, 00:28:25.024 "data_offset": 256, 00:28:25.024 "data_size": 7936 00:28:25.024 } 00:28:25.024 ] 00:28:25.024 }' 00:28:25.024 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:25.024 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:25.024 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:25.024 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:25.024 23:10:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:25.024 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:25.024 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:25.024 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:25.024 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:25.024 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:25.024 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:25.024 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:25.024 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:25.024 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:25.024 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:25.024 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.024 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:25.024 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:25.024 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.024 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:25.024 "name": 
"raid_bdev1", 00:28:25.024 "uuid": "07e3ab5e-e603-4046-ba75-3765884930f3", 00:28:25.024 "strip_size_kb": 0, 00:28:25.024 "state": "online", 00:28:25.024 "raid_level": "raid1", 00:28:25.024 "superblock": true, 00:28:25.024 "num_base_bdevs": 2, 00:28:25.024 "num_base_bdevs_discovered": 2, 00:28:25.024 "num_base_bdevs_operational": 2, 00:28:25.024 "base_bdevs_list": [ 00:28:25.024 { 00:28:25.024 "name": "spare", 00:28:25.024 "uuid": "e6270576-61e4-5635-a304-e393ac3795f0", 00:28:25.024 "is_configured": true, 00:28:25.024 "data_offset": 256, 00:28:25.024 "data_size": 7936 00:28:25.024 }, 00:28:25.024 { 00:28:25.024 "name": "BaseBdev2", 00:28:25.024 "uuid": "03c26711-79b3-5a0f-8cc8-c3b27b812ead", 00:28:25.024 "is_configured": true, 00:28:25.025 "data_offset": 256, 00:28:25.025 "data_size": 7936 00:28:25.025 } 00:28:25.025 ] 00:28:25.025 }' 00:28:25.025 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:25.025 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:25.286 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:25.286 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.286 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:25.286 [2024-12-09 23:10:00.514605] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:25.286 [2024-12-09 23:10:00.514632] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:25.286 [2024-12-09 23:10:00.514697] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:25.286 [2024-12-09 23:10:00.514755] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:25.286 [2024-12-09 
23:10:00.514766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:28:25.286 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.286 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:25.286 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:28:25.286 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.286 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:25.286 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.286 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:28:25.286 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:28:25.286 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:28:25.286 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:28:25.286 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.286 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:25.286 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.286 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:28:25.286 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.286 23:10:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:25.286 [2024-12-09 23:10:00.562588] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:25.286 [2024-12-09 23:10:00.562724] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:25.286 [2024-12-09 23:10:00.562745] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:28:25.286 [2024-12-09 23:10:00.562753] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:25.286 [2024-12-09 23:10:00.564391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:25.286 [2024-12-09 23:10:00.564419] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:25.286 [2024-12-09 23:10:00.564466] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:25.286 [2024-12-09 23:10:00.564515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:25.286 [2024-12-09 23:10:00.564598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:25.286 spare 00:28:25.286 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.286 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:28:25.286 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.286 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:25.547 [2024-12-09 23:10:00.664671] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:28:25.547 [2024-12-09 23:10:00.664710] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:25.547 [2024-12-09 23:10:00.664808] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:28:25.547 [2024-12-09 23:10:00.664890] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:28:25.547 [2024-12-09 23:10:00.664898] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:28:25.547 [2024-12-09 23:10:00.664978] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:25.547 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.547 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:25.547 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:25.547 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:25.547 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:25.547 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:25.547 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:25.547 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:25.547 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:25.547 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:25.547 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:25.547 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:25.547 
23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:25.547 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.547 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:25.547 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.547 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:25.547 "name": "raid_bdev1", 00:28:25.547 "uuid": "07e3ab5e-e603-4046-ba75-3765884930f3", 00:28:25.547 "strip_size_kb": 0, 00:28:25.547 "state": "online", 00:28:25.547 "raid_level": "raid1", 00:28:25.547 "superblock": true, 00:28:25.547 "num_base_bdevs": 2, 00:28:25.547 "num_base_bdevs_discovered": 2, 00:28:25.547 "num_base_bdevs_operational": 2, 00:28:25.547 "base_bdevs_list": [ 00:28:25.547 { 00:28:25.547 "name": "spare", 00:28:25.547 "uuid": "e6270576-61e4-5635-a304-e393ac3795f0", 00:28:25.547 "is_configured": true, 00:28:25.547 "data_offset": 256, 00:28:25.547 "data_size": 7936 00:28:25.547 }, 00:28:25.547 { 00:28:25.547 "name": "BaseBdev2", 00:28:25.547 "uuid": "03c26711-79b3-5a0f-8cc8-c3b27b812ead", 00:28:25.547 "is_configured": true, 00:28:25.547 "data_offset": 256, 00:28:25.547 "data_size": 7936 00:28:25.547 } 00:28:25.547 ] 00:28:25.547 }' 00:28:25.547 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:25.547 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:25.807 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:25.807 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:25.807 23:10:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:25.807 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:25.807 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:25.807 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:25.807 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.807 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:25.807 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:25.807 23:10:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.807 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:25.807 "name": "raid_bdev1", 00:28:25.807 "uuid": "07e3ab5e-e603-4046-ba75-3765884930f3", 00:28:25.807 "strip_size_kb": 0, 00:28:25.807 "state": "online", 00:28:25.807 "raid_level": "raid1", 00:28:25.807 "superblock": true, 00:28:25.807 "num_base_bdevs": 2, 00:28:25.807 "num_base_bdevs_discovered": 2, 00:28:25.807 "num_base_bdevs_operational": 2, 00:28:25.807 "base_bdevs_list": [ 00:28:25.807 { 00:28:25.807 "name": "spare", 00:28:25.807 "uuid": "e6270576-61e4-5635-a304-e393ac3795f0", 00:28:25.807 "is_configured": true, 00:28:25.807 "data_offset": 256, 00:28:25.807 "data_size": 7936 00:28:25.807 }, 00:28:25.807 { 00:28:25.807 "name": "BaseBdev2", 00:28:25.807 "uuid": "03c26711-79b3-5a0f-8cc8-c3b27b812ead", 00:28:25.807 "is_configured": true, 00:28:25.807 "data_offset": 256, 00:28:25.807 "data_size": 7936 00:28:25.807 } 00:28:25.807 ] 00:28:25.807 }' 00:28:25.807 23:10:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:25.807 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:25.807 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:25.807 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:25.807 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:28:25.807 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:25.807 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.807 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:25.807 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.807 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:28:25.807 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:28:25.807 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.807 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:25.807 [2024-12-09 23:10:01.122795] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:25.807 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.807 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:25.807 23:10:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:25.807 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:25.807 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:25.807 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:25.807 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:25.807 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:25.807 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:25.807 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:25.807 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:25.807 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:25.807 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:25.807 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.807 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:25.807 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.807 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:25.807 "name": "raid_bdev1", 00:28:25.807 "uuid": "07e3ab5e-e603-4046-ba75-3765884930f3", 00:28:25.807 "strip_size_kb": 0, 00:28:25.807 "state": "online", 00:28:25.807 
"raid_level": "raid1", 00:28:25.807 "superblock": true, 00:28:25.807 "num_base_bdevs": 2, 00:28:25.807 "num_base_bdevs_discovered": 1, 00:28:25.807 "num_base_bdevs_operational": 1, 00:28:25.807 "base_bdevs_list": [ 00:28:25.807 { 00:28:25.807 "name": null, 00:28:25.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:25.807 "is_configured": false, 00:28:25.807 "data_offset": 0, 00:28:25.807 "data_size": 7936 00:28:25.807 }, 00:28:25.807 { 00:28:25.807 "name": "BaseBdev2", 00:28:25.807 "uuid": "03c26711-79b3-5a0f-8cc8-c3b27b812ead", 00:28:25.807 "is_configured": true, 00:28:25.807 "data_offset": 256, 00:28:25.807 "data_size": 7936 00:28:25.807 } 00:28:25.808 ] 00:28:25.808 }' 00:28:25.808 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:25.808 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:26.378 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:26.378 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.378 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:26.378 [2024-12-09 23:10:01.450853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:26.378 [2024-12-09 23:10:01.450999] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:26.378 [2024-12-09 23:10:01.451012] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:28:26.378 [2024-12-09 23:10:01.451046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:26.378 [2024-12-09 23:10:01.460066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:28:26.378 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.378 23:10:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:28:26.378 [2024-12-09 23:10:01.461672] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:28:27.398 "name": "raid_bdev1", 00:28:27.398 "uuid": "07e3ab5e-e603-4046-ba75-3765884930f3", 00:28:27.398 "strip_size_kb": 0, 00:28:27.398 "state": "online", 00:28:27.398 "raid_level": "raid1", 00:28:27.398 "superblock": true, 00:28:27.398 "num_base_bdevs": 2, 00:28:27.398 "num_base_bdevs_discovered": 2, 00:28:27.398 "num_base_bdevs_operational": 2, 00:28:27.398 "process": { 00:28:27.398 "type": "rebuild", 00:28:27.398 "target": "spare", 00:28:27.398 "progress": { 00:28:27.398 "blocks": 2560, 00:28:27.398 "percent": 32 00:28:27.398 } 00:28:27.398 }, 00:28:27.398 "base_bdevs_list": [ 00:28:27.398 { 00:28:27.398 "name": "spare", 00:28:27.398 "uuid": "e6270576-61e4-5635-a304-e393ac3795f0", 00:28:27.398 "is_configured": true, 00:28:27.398 "data_offset": 256, 00:28:27.398 "data_size": 7936 00:28:27.398 }, 00:28:27.398 { 00:28:27.398 "name": "BaseBdev2", 00:28:27.398 "uuid": "03c26711-79b3-5a0f-8cc8-c3b27b812ead", 00:28:27.398 "is_configured": true, 00:28:27.398 "data_offset": 256, 00:28:27.398 "data_size": 7936 00:28:27.398 } 00:28:27.398 ] 00:28:27.398 }' 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:27.398 [2024-12-09 23:10:02.567903] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:27.398 [2024-12-09 23:10:02.667026] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:27.398 [2024-12-09 23:10:02.667239] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:27.398 [2024-12-09 23:10:02.667255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:27.398 [2024-12-09 23:10:02.667263] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:27.398 23:10:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:27.398 "name": "raid_bdev1", 00:28:27.398 "uuid": "07e3ab5e-e603-4046-ba75-3765884930f3", 00:28:27.398 "strip_size_kb": 0, 00:28:27.398 "state": "online", 00:28:27.398 "raid_level": "raid1", 00:28:27.398 "superblock": true, 00:28:27.398 "num_base_bdevs": 2, 00:28:27.398 "num_base_bdevs_discovered": 1, 00:28:27.398 "num_base_bdevs_operational": 1, 00:28:27.398 "base_bdevs_list": [ 00:28:27.398 { 00:28:27.398 "name": null, 00:28:27.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:27.398 "is_configured": false, 00:28:27.398 "data_offset": 0, 00:28:27.398 "data_size": 7936 00:28:27.398 }, 00:28:27.398 { 00:28:27.398 "name": "BaseBdev2", 00:28:27.398 "uuid": "03c26711-79b3-5a0f-8cc8-c3b27b812ead", 00:28:27.398 "is_configured": true, 00:28:27.398 "data_offset": 256, 00:28:27.398 "data_size": 7936 00:28:27.398 } 00:28:27.398 ] 00:28:27.398 }' 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:27.398 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:27.661 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:28:27.661 23:10:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.661 23:10:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:27.661 [2024-12-09 23:10:02.997854] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:27.661 [2024-12-09 23:10:02.997910] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:27.661 [2024-12-09 23:10:02.997929] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:28:27.661 [2024-12-09 23:10:02.997939] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:27.661 [2024-12-09 23:10:02.998091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:27.661 [2024-12-09 23:10:02.998112] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:27.661 [2024-12-09 23:10:02.998159] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:27.661 [2024-12-09 23:10:02.998170] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:27.661 [2024-12-09 23:10:02.998177] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:28:27.661 [2024-12-09 23:10:02.998193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:27.661 [2024-12-09 23:10:03.006949] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:28:27.661 spare 00:28:27.661 23:10:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.661 23:10:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:28:27.661 [2024-12-09 23:10:03.008548] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:29.047 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:29.047 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:29.047 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:29.047 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:29.047 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:29.047 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:29.047 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:29.047 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.047 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:29.047 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.047 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:28:29.047 "name": "raid_bdev1", 00:28:29.047 "uuid": "07e3ab5e-e603-4046-ba75-3765884930f3", 00:28:29.047 "strip_size_kb": 0, 00:28:29.047 "state": "online", 00:28:29.047 "raid_level": "raid1", 00:28:29.047 "superblock": true, 00:28:29.047 "num_base_bdevs": 2, 00:28:29.047 "num_base_bdevs_discovered": 2, 00:28:29.047 "num_base_bdevs_operational": 2, 00:28:29.047 "process": { 00:28:29.047 "type": "rebuild", 00:28:29.047 "target": "spare", 00:28:29.047 "progress": { 00:28:29.047 "blocks": 2560, 00:28:29.047 "percent": 32 00:28:29.047 } 00:28:29.047 }, 00:28:29.047 "base_bdevs_list": [ 00:28:29.047 { 00:28:29.047 "name": "spare", 00:28:29.047 "uuid": "e6270576-61e4-5635-a304-e393ac3795f0", 00:28:29.047 "is_configured": true, 00:28:29.047 "data_offset": 256, 00:28:29.047 "data_size": 7936 00:28:29.047 }, 00:28:29.047 { 00:28:29.047 "name": "BaseBdev2", 00:28:29.047 "uuid": "03c26711-79b3-5a0f-8cc8-c3b27b812ead", 00:28:29.047 "is_configured": true, 00:28:29.047 "data_offset": 256, 00:28:29.047 "data_size": 7936 00:28:29.047 } 00:28:29.047 ] 00:28:29.047 }' 00:28:29.047 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:29.047 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:29.047 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:29.047 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:29.047 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:28:29.047 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.047 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:29.047 [2024-12-09 
23:10:04.110885] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:29.047 [2024-12-09 23:10:04.113607] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:29.047 [2024-12-09 23:10:04.113750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:29.047 [2024-12-09 23:10:04.113768] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:29.047 [2024-12-09 23:10:04.113775] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:29.047 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.047 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:29.047 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:29.047 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:29.047 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:29.047 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:29.047 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:29.047 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:29.047 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:29.047 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:29.048 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:29.048 23:10:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:29.048 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:29.048 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.048 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:29.048 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.048 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:29.048 "name": "raid_bdev1", 00:28:29.048 "uuid": "07e3ab5e-e603-4046-ba75-3765884930f3", 00:28:29.048 "strip_size_kb": 0, 00:28:29.048 "state": "online", 00:28:29.048 "raid_level": "raid1", 00:28:29.048 "superblock": true, 00:28:29.048 "num_base_bdevs": 2, 00:28:29.048 "num_base_bdevs_discovered": 1, 00:28:29.048 "num_base_bdevs_operational": 1, 00:28:29.048 "base_bdevs_list": [ 00:28:29.048 { 00:28:29.048 "name": null, 00:28:29.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:29.048 "is_configured": false, 00:28:29.048 "data_offset": 0, 00:28:29.048 "data_size": 7936 00:28:29.048 }, 00:28:29.048 { 00:28:29.048 "name": "BaseBdev2", 00:28:29.048 "uuid": "03c26711-79b3-5a0f-8cc8-c3b27b812ead", 00:28:29.048 "is_configured": true, 00:28:29.048 "data_offset": 256, 00:28:29.048 "data_size": 7936 00:28:29.048 } 00:28:29.048 ] 00:28:29.048 }' 00:28:29.048 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:29.048 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:29.310 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:29.310 23:10:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:29.310 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:29.310 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:29.310 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:29.310 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:29.310 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:29.310 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.310 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:29.310 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.310 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:29.310 "name": "raid_bdev1", 00:28:29.310 "uuid": "07e3ab5e-e603-4046-ba75-3765884930f3", 00:28:29.310 "strip_size_kb": 0, 00:28:29.310 "state": "online", 00:28:29.310 "raid_level": "raid1", 00:28:29.310 "superblock": true, 00:28:29.310 "num_base_bdevs": 2, 00:28:29.310 "num_base_bdevs_discovered": 1, 00:28:29.310 "num_base_bdevs_operational": 1, 00:28:29.310 "base_bdevs_list": [ 00:28:29.310 { 00:28:29.310 "name": null, 00:28:29.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:29.310 "is_configured": false, 00:28:29.310 "data_offset": 0, 00:28:29.310 "data_size": 7936 00:28:29.310 }, 00:28:29.310 { 00:28:29.310 "name": "BaseBdev2", 00:28:29.310 "uuid": "03c26711-79b3-5a0f-8cc8-c3b27b812ead", 00:28:29.310 "is_configured": true, 00:28:29.310 "data_offset": 256, 
00:28:29.310 "data_size": 7936 00:28:29.310 } 00:28:29.310 ] 00:28:29.310 }' 00:28:29.310 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:29.310 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:29.310 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:29.310 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:29.310 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:28:29.310 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.310 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:29.310 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.310 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:29.310 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.310 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:29.310 [2024-12-09 23:10:04.532526] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:29.310 [2024-12-09 23:10:04.532569] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:29.310 [2024-12-09 23:10:04.532585] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:28:29.310 [2024-12-09 23:10:04.532593] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:29.310 [2024-12-09 23:10:04.532727] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:29.310 [2024-12-09 23:10:04.532736] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:29.310 [2024-12-09 23:10:04.532774] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:28:29.310 [2024-12-09 23:10:04.532784] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:29.310 [2024-12-09 23:10:04.532792] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:29.310 [2024-12-09 23:10:04.532799] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:28:29.310 BaseBdev1 00:28:29.310 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.310 23:10:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:28:30.362 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:30.362 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:30.362 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:30.362 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:30.362 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:30.362 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:30.362 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:30.362 23:10:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:30.362 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:30.362 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:30.362 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:30.362 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.362 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:30.362 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:30.362 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.362 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:30.362 "name": "raid_bdev1", 00:28:30.362 "uuid": "07e3ab5e-e603-4046-ba75-3765884930f3", 00:28:30.362 "strip_size_kb": 0, 00:28:30.362 "state": "online", 00:28:30.362 "raid_level": "raid1", 00:28:30.362 "superblock": true, 00:28:30.362 "num_base_bdevs": 2, 00:28:30.362 "num_base_bdevs_discovered": 1, 00:28:30.362 "num_base_bdevs_operational": 1, 00:28:30.362 "base_bdevs_list": [ 00:28:30.362 { 00:28:30.362 "name": null, 00:28:30.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:30.362 "is_configured": false, 00:28:30.362 "data_offset": 0, 00:28:30.362 "data_size": 7936 00:28:30.362 }, 00:28:30.362 { 00:28:30.362 "name": "BaseBdev2", 00:28:30.362 "uuid": "03c26711-79b3-5a0f-8cc8-c3b27b812ead", 00:28:30.362 "is_configured": true, 00:28:30.362 "data_offset": 256, 00:28:30.362 "data_size": 7936 00:28:30.362 } 00:28:30.362 ] 00:28:30.362 }' 00:28:30.362 23:10:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:30.362 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:30.623 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:30.623 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:30.623 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:30.623 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:30.623 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:30.623 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:30.623 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:30.623 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.623 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:30.623 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.623 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:30.623 "name": "raid_bdev1", 00:28:30.623 "uuid": "07e3ab5e-e603-4046-ba75-3765884930f3", 00:28:30.623 "strip_size_kb": 0, 00:28:30.623 "state": "online", 00:28:30.623 "raid_level": "raid1", 00:28:30.623 "superblock": true, 00:28:30.623 "num_base_bdevs": 2, 00:28:30.623 "num_base_bdevs_discovered": 1, 00:28:30.623 "num_base_bdevs_operational": 1, 00:28:30.623 "base_bdevs_list": [ 00:28:30.623 { 00:28:30.623 "name": 
null, 00:28:30.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:30.623 "is_configured": false, 00:28:30.623 "data_offset": 0, 00:28:30.623 "data_size": 7936 00:28:30.623 }, 00:28:30.623 { 00:28:30.623 "name": "BaseBdev2", 00:28:30.623 "uuid": "03c26711-79b3-5a0f-8cc8-c3b27b812ead", 00:28:30.623 "is_configured": true, 00:28:30.623 "data_offset": 256, 00:28:30.623 "data_size": 7936 00:28:30.623 } 00:28:30.623 ] 00:28:30.623 }' 00:28:30.623 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:30.623 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:30.623 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:30.623 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:30.623 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:30.623 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:28:30.623 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:30.623 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:30.623 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:30.623 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:30.623 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:30.624 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:30.624 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.624 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:30.624 [2024-12-09 23:10:05.952853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:30.624 [2024-12-09 23:10:05.952969] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:30.624 [2024-12-09 23:10:05.952983] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:30.624 request: 00:28:30.624 { 00:28:30.624 "base_bdev": "BaseBdev1", 00:28:30.624 "raid_bdev": "raid_bdev1", 00:28:30.624 "method": "bdev_raid_add_base_bdev", 00:28:30.624 "req_id": 1 00:28:30.624 } 00:28:30.624 Got JSON-RPC error response 00:28:30.624 response: 00:28:30.624 { 00:28:30.624 "code": -22, 00:28:30.624 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:28:30.624 } 00:28:30.624 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:30.624 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:28:30.624 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:30.624 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:30.624 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:30.624 23:10:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:28:32.008 23:10:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:28:32.008 23:10:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:32.008 23:10:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:32.008 23:10:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:32.008 23:10:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:32.008 23:10:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:32.008 23:10:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:32.008 23:10:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:32.008 23:10:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:32.008 23:10:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:32.008 23:10:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:32.009 23:10:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.009 23:10:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:32.009 23:10:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:32.009 23:10:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.009 23:10:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:32.009 "name": "raid_bdev1", 00:28:32.009 "uuid": "07e3ab5e-e603-4046-ba75-3765884930f3", 00:28:32.009 "strip_size_kb": 0, 
00:28:32.009 "state": "online", 00:28:32.009 "raid_level": "raid1", 00:28:32.009 "superblock": true, 00:28:32.009 "num_base_bdevs": 2, 00:28:32.009 "num_base_bdevs_discovered": 1, 00:28:32.009 "num_base_bdevs_operational": 1, 00:28:32.009 "base_bdevs_list": [ 00:28:32.009 { 00:28:32.009 "name": null, 00:28:32.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:32.009 "is_configured": false, 00:28:32.009 "data_offset": 0, 00:28:32.009 "data_size": 7936 00:28:32.009 }, 00:28:32.009 { 00:28:32.009 "name": "BaseBdev2", 00:28:32.009 "uuid": "03c26711-79b3-5a0f-8cc8-c3b27b812ead", 00:28:32.009 "is_configured": true, 00:28:32.009 "data_offset": 256, 00:28:32.009 "data_size": 7936 00:28:32.009 } 00:28:32.009 ] 00:28:32.009 }' 00:28:32.009 23:10:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:32.009 23:10:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:32.009 23:10:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:32.009 23:10:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:32.009 23:10:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:32.009 23:10:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:32.009 23:10:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:32.009 23:10:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:32.009 23:10:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:32.009 23:10:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.009 
23:10:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:32.009 23:10:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.009 23:10:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:32.009 "name": "raid_bdev1", 00:28:32.009 "uuid": "07e3ab5e-e603-4046-ba75-3765884930f3", 00:28:32.009 "strip_size_kb": 0, 00:28:32.009 "state": "online", 00:28:32.009 "raid_level": "raid1", 00:28:32.009 "superblock": true, 00:28:32.009 "num_base_bdevs": 2, 00:28:32.009 "num_base_bdevs_discovered": 1, 00:28:32.009 "num_base_bdevs_operational": 1, 00:28:32.009 "base_bdevs_list": [ 00:28:32.009 { 00:28:32.009 "name": null, 00:28:32.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:32.009 "is_configured": false, 00:28:32.009 "data_offset": 0, 00:28:32.009 "data_size": 7936 00:28:32.009 }, 00:28:32.009 { 00:28:32.009 "name": "BaseBdev2", 00:28:32.009 "uuid": "03c26711-79b3-5a0f-8cc8-c3b27b812ead", 00:28:32.009 "is_configured": true, 00:28:32.009 "data_offset": 256, 00:28:32.009 "data_size": 7936 00:28:32.009 } 00:28:32.009 ] 00:28:32.009 }' 00:28:32.009 23:10:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:32.009 23:10:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:32.009 23:10:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:32.009 23:10:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:32.009 23:10:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 86457 00:28:32.009 23:10:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 86457 ']' 00:28:32.009 23:10:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 86457 00:28:32.009 23:10:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:28:32.009 23:10:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:32.009 23:10:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86457 00:28:32.269 killing process with pid 86457 00:28:32.269 Received shutdown signal, test time was about 60.000000 seconds 00:28:32.269 00:28:32.269 Latency(us) 00:28:32.269 [2024-12-09T23:10:07.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:32.269 [2024-12-09T23:10:07.633Z] =================================================================================================================== 00:28:32.270 [2024-12-09T23:10:07.633Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:32.270 23:10:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:32.270 23:10:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:32.270 23:10:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86457' 00:28:32.270 23:10:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 86457 00:28:32.270 [2024-12-09 23:10:07.384502] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:32.270 23:10:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 86457 00:28:32.270 [2024-12-09 23:10:07.384597] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:32.270 [2024-12-09 23:10:07.384634] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:28:32.270 [2024-12-09 23:10:07.384643] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:28:32.270 [2024-12-09 23:10:07.532436] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:32.841 ************************************ 00:28:32.841 END TEST raid_rebuild_test_sb_md_interleaved 00:28:32.841 ************************************ 00:28:32.841 23:10:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:28:32.841 00:28:32.841 real 0m14.952s 00:28:32.841 user 0m19.047s 00:28:32.841 sys 0m1.068s 00:28:32.841 23:10:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:32.841 23:10:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:32.841 23:10:08 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:28:32.841 23:10:08 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:28:32.841 23:10:08 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 86457 ']' 00:28:32.841 23:10:08 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 86457 00:28:32.841 23:10:08 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:28:32.841 ************************************ 00:28:32.841 END TEST bdev_raid 00:28:32.841 ************************************ 00:28:32.841 00:28:32.841 real 9m46.057s 00:28:32.841 user 12m54.703s 00:28:32.841 sys 1m25.557s 00:28:32.841 23:10:08 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:32.841 23:10:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:33.113 23:10:08 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:28:33.113 23:10:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:33.113 23:10:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:33.113 23:10:08 -- common/autotest_common.sh@10 -- # set +x 00:28:33.113 
************************************ 00:28:33.113 START TEST spdkcli_raid 00:28:33.113 ************************************ 00:28:33.113 23:10:08 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:28:33.113 * Looking for test storage... 00:28:33.113 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:28:33.113 23:10:08 spdkcli_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:33.113 23:10:08 spdkcli_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:28:33.113 23:10:08 spdkcli_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:33.113 23:10:08 spdkcli_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:33.113 23:10:08 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:33.113 23:10:08 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:33.113 23:10:08 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:33.113 23:10:08 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:28:33.113 23:10:08 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:28:33.113 23:10:08 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:28:33.113 23:10:08 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:28:33.113 23:10:08 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:28:33.113 23:10:08 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:28:33.113 23:10:08 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:28:33.113 23:10:08 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:33.113 23:10:08 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:28:33.113 23:10:08 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:28:33.113 23:10:08 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:33.113 23:10:08 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:33.113 23:10:08 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:28:33.113 23:10:08 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:28:33.113 23:10:08 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:33.113 23:10:08 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:28:33.113 23:10:08 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:28:33.113 23:10:08 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:28:33.113 23:10:08 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:28:33.113 23:10:08 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:33.113 23:10:08 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:28:33.113 23:10:08 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:28:33.113 23:10:08 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:33.113 23:10:08 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:33.113 23:10:08 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:28:33.113 23:10:08 spdkcli_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:33.113 23:10:08 spdkcli_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:33.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.113 --rc genhtml_branch_coverage=1 00:28:33.113 --rc genhtml_function_coverage=1 00:28:33.113 --rc genhtml_legend=1 00:28:33.113 --rc geninfo_all_blocks=1 00:28:33.113 --rc geninfo_unexecuted_blocks=1 00:28:33.113 00:28:33.113 ' 00:28:33.113 23:10:08 spdkcli_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:33.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.114 --rc genhtml_branch_coverage=1 00:28:33.114 --rc genhtml_function_coverage=1 00:28:33.114 --rc genhtml_legend=1 00:28:33.114 --rc geninfo_all_blocks=1 00:28:33.114 --rc geninfo_unexecuted_blocks=1 00:28:33.114 00:28:33.114 ' 00:28:33.114 
23:10:08 spdkcli_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:33.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.114 --rc genhtml_branch_coverage=1 00:28:33.114 --rc genhtml_function_coverage=1 00:28:33.114 --rc genhtml_legend=1 00:28:33.114 --rc geninfo_all_blocks=1 00:28:33.114 --rc geninfo_unexecuted_blocks=1 00:28:33.114 00:28:33.114 ' 00:28:33.114 23:10:08 spdkcli_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:33.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.114 --rc genhtml_branch_coverage=1 00:28:33.114 --rc genhtml_function_coverage=1 00:28:33.114 --rc genhtml_legend=1 00:28:33.114 --rc geninfo_all_blocks=1 00:28:33.114 --rc geninfo_unexecuted_blocks=1 00:28:33.114 00:28:33.114 ' 00:28:33.114 23:10:08 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:28:33.114 23:10:08 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:28:33.114 23:10:08 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:28:33.114 23:10:08 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:28:33.114 23:10:08 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:28:33.114 23:10:08 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:28:33.114 23:10:08 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:28:33.114 23:10:08 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:28:33.114 23:10:08 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:28:33.114 23:10:08 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:28:33.114 23:10:08 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:28:33.114 23:10:08 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:28:33.114 23:10:08 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:28:33.114 23:10:08 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:28:33.114 23:10:08 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:28:33.114 23:10:08 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:28:33.114 23:10:08 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:28:33.114 23:10:08 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:28:33.114 23:10:08 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:28:33.114 23:10:08 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:28:33.114 23:10:08 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:28:33.114 23:10:08 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:28:33.114 23:10:08 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:28:33.114 23:10:08 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:28:33.114 23:10:08 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:28:33.114 23:10:08 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:28:33.114 23:10:08 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:28:33.114 23:10:08 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:28:33.114 23:10:08 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:28:33.114 23:10:08 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:28:33.114 23:10:08 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:28:33.114 23:10:08 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:28:33.114 23:10:08 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:28:33.114 23:10:08 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:33.114 23:10:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:33.114 23:10:08 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:28:33.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:33.114 23:10:08 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=87111 00:28:33.114 23:10:08 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 87111 00:28:33.114 23:10:08 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:28:33.114 23:10:08 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 87111 ']' 00:28:33.114 23:10:08 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.114 23:10:08 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:33.114 23:10:08 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:33.114 23:10:08 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:33.114 23:10:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:33.114 [2024-12-09 23:10:08.440833] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:28:33.114 [2024-12-09 23:10:08.440965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87111 ] 00:28:33.375 [2024-12-09 23:10:08.604677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:33.636 [2024-12-09 23:10:08.754300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.636 [2024-12-09 23:10:08.754497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.208 23:10:09 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:34.208 23:10:09 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:28:34.208 23:10:09 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:28:34.208 23:10:09 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:34.208 23:10:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:34.208 23:10:09 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:28:34.208 23:10:09 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:34.208 23:10:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:34.208 23:10:09 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:28:34.208 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:28:34.208 ' 00:28:35.594 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:28:35.594 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:28:35.865 23:10:11 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:28:35.865 23:10:11 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:35.865 23:10:11 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:28:35.865 23:10:11 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:28:35.865 23:10:11 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:35.865 23:10:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:35.865 23:10:11 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:28:35.865 ' 00:28:36.811 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:28:36.811 23:10:12 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:28:36.811 23:10:12 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:36.811 23:10:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:36.811 23:10:12 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:28:36.811 23:10:12 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:36.811 23:10:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:37.070 23:10:12 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:28:37.070 23:10:12 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:28:37.329 23:10:12 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:28:37.329 23:10:12 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:28:37.329 23:10:12 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:28:37.329 23:10:12 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:37.329 23:10:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:37.329 23:10:12 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:28:37.329 23:10:12 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:37.329 23:10:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:37.329 23:10:12 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:28:37.329 ' 00:28:38.271 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:28:38.533 23:10:13 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:28:38.533 23:10:13 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:38.533 23:10:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:38.533 23:10:13 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:28:38.533 23:10:13 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:38.533 23:10:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:38.533 23:10:13 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:28:38.533 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:28:38.533 ' 00:28:39.920 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:28:39.920 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:28:39.920 23:10:15 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:28:39.920 23:10:15 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:39.920 23:10:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:39.920 23:10:15 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 87111 00:28:39.920 23:10:15 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 87111 ']' 00:28:39.920 23:10:15 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 87111 00:28:39.920 23:10:15 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:28:39.920 23:10:15 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:39.920 23:10:15 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87111 00:28:39.920 killing process with pid 87111 00:28:39.920 23:10:15 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:39.920 23:10:15 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:39.920 23:10:15 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87111' 00:28:39.920 23:10:15 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 87111 00:28:39.920 23:10:15 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 87111 00:28:41.304 23:10:16 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:28:41.304 23:10:16 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 87111 ']' 00:28:41.304 23:10:16 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 87111 00:28:41.304 23:10:16 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 87111 ']' 00:28:41.304 23:10:16 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 87111 00:28:41.304 Process with pid 87111 is not found 00:28:41.304 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (87111) - No such process 00:28:41.304 23:10:16 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 87111 is not found' 00:28:41.304 23:10:16 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:28:41.304 23:10:16 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:28:41.304 23:10:16 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:28:41.304 23:10:16 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:28:41.304 ************************************ 00:28:41.304 END TEST spdkcli_raid 
00:28:41.304 ************************************ 00:28:41.304 00:28:41.305 real 0m8.210s 00:28:41.305 user 0m17.061s 00:28:41.305 sys 0m0.733s 00:28:41.305 23:10:16 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:41.305 23:10:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:41.305 23:10:16 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:28:41.305 23:10:16 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:41.305 23:10:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:41.305 23:10:16 -- common/autotest_common.sh@10 -- # set +x 00:28:41.305 ************************************ 00:28:41.305 START TEST blockdev_raid5f 00:28:41.305 ************************************ 00:28:41.305 23:10:16 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:28:41.305 * Looking for test storage... 00:28:41.305 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:28:41.305 23:10:16 blockdev_raid5f -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:41.305 23:10:16 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lcov --version 00:28:41.305 23:10:16 blockdev_raid5f -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:41.305 23:10:16 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:41.305 23:10:16 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:41.305 23:10:16 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:41.305 23:10:16 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:41.305 23:10:16 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:28:41.305 23:10:16 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:28:41.305 23:10:16 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:28:41.305 23:10:16 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:28:41.305 23:10:16 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:28:41.305 23:10:16 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:28:41.305 23:10:16 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:28:41.305 23:10:16 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:41.305 23:10:16 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:28:41.305 23:10:16 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:28:41.305 23:10:16 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:41.305 23:10:16 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:41.305 23:10:16 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:28:41.305 23:10:16 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:28:41.305 23:10:16 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:41.305 23:10:16 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:28:41.305 23:10:16 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:28:41.305 23:10:16 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:28:41.305 23:10:16 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:28:41.305 23:10:16 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:41.305 23:10:16 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:28:41.305 23:10:16 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:28:41.305 23:10:16 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:41.305 23:10:16 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:41.305 23:10:16 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:28:41.305 23:10:16 blockdev_raid5f -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:41.305 23:10:16 blockdev_raid5f -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:41.305 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.305 --rc genhtml_branch_coverage=1 00:28:41.305 --rc genhtml_function_coverage=1 00:28:41.305 --rc genhtml_legend=1 00:28:41.305 --rc geninfo_all_blocks=1 00:28:41.305 --rc geninfo_unexecuted_blocks=1 00:28:41.305 00:28:41.305 ' 00:28:41.305 23:10:16 blockdev_raid5f -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:41.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.305 --rc genhtml_branch_coverage=1 00:28:41.305 --rc genhtml_function_coverage=1 00:28:41.305 --rc genhtml_legend=1 00:28:41.305 --rc geninfo_all_blocks=1 00:28:41.305 --rc geninfo_unexecuted_blocks=1 00:28:41.305 00:28:41.305 ' 00:28:41.305 23:10:16 blockdev_raid5f -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:41.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.305 --rc genhtml_branch_coverage=1 00:28:41.305 --rc genhtml_function_coverage=1 00:28:41.305 --rc genhtml_legend=1 00:28:41.305 --rc geninfo_all_blocks=1 00:28:41.305 --rc geninfo_unexecuted_blocks=1 00:28:41.305 00:28:41.305 ' 00:28:41.305 23:10:16 blockdev_raid5f -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:41.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.305 --rc genhtml_branch_coverage=1 00:28:41.305 --rc genhtml_function_coverage=1 00:28:41.305 --rc genhtml_legend=1 00:28:41.305 --rc geninfo_all_blocks=1 00:28:41.305 --rc geninfo_unexecuted_blocks=1 00:28:41.305 00:28:41.305 ' 00:28:41.305 23:10:16 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:28:41.305 23:10:16 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:28:41.305 23:10:16 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:28:41.305 23:10:16 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:41.305 23:10:16 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:28:41.305 23:10:16 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:28:41.305 23:10:16 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:28:41.305 23:10:16 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:28:41.305 23:10:16 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:28:41.305 23:10:16 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:28:41.305 23:10:16 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:28:41.305 23:10:16 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:28:41.305 23:10:16 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:28:41.305 23:10:16 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:28:41.305 23:10:16 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:28:41.305 23:10:16 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:28:41.305 23:10:16 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:28:41.305 23:10:16 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:28:41.305 23:10:16 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:28:41.305 23:10:16 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:28:41.305 23:10:16 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:28:41.305 23:10:16 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:28:41.305 23:10:16 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:28:41.305 23:10:16 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:28:41.305 23:10:16 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=87369 00:28:41.305 23:10:16 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:28:41.305 23:10:16 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 
87369 00:28:41.305 23:10:16 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 87369 ']' 00:28:41.305 23:10:16 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:28:41.305 23:10:16 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:41.305 23:10:16 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:41.305 23:10:16 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:41.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:41.305 23:10:16 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:41.305 23:10:16 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:41.566 [2024-12-09 23:10:16.690191] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:28:41.566 [2024-12-09 23:10:16.690486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87369 ] 00:28:41.566 [2024-12-09 23:10:16.849259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.827 [2024-12-09 23:10:16.954687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.397 23:10:17 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:42.397 23:10:17 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:28:42.397 23:10:17 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:28:42.397 23:10:17 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:28:42.397 23:10:17 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:28:42.397 23:10:17 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.397 23:10:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:42.397 Malloc0 00:28:42.397 Malloc1 00:28:42.397 Malloc2 00:28:42.397 23:10:17 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.397 23:10:17 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:28:42.397 23:10:17 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.397 23:10:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:42.397 23:10:17 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.397 23:10:17 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:28:42.397 23:10:17 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:28:42.397 23:10:17 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.397 23:10:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:42.397 23:10:17 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.397 23:10:17 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:28:42.397 23:10:17 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.397 23:10:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:42.397 23:10:17 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.397 23:10:17 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:28:42.397 23:10:17 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.397 23:10:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:42.397 23:10:17 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.397 23:10:17 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:28:42.397 23:10:17 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 
00:28:42.397 23:10:17 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:28:42.397 23:10:17 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.397 23:10:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:42.398 23:10:17 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.398 23:10:17 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:28:42.398 23:10:17 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "7afb72ef-5707-4f31-b484-a3d33190b3a3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "7afb72ef-5707-4f31-b484-a3d33190b3a3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "7afb72ef-5707-4f31-b484-a3d33190b3a3",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "0bf02c08-8bfb-4b45-bc2e-abc834f04c00",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "4150c014-26c8-467d-907f-fc221c7ebce7",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "0a34f968-ff0c-402f-8c52-e040799461e0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:28:42.398 23:10:17 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:28:42.658 23:10:17 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:28:42.658 23:10:17 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:28:42.658 23:10:17 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:28:42.658 23:10:17 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 87369 00:28:42.658 23:10:17 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 87369 ']' 00:28:42.658 23:10:17 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 87369 00:28:42.658 23:10:17 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:28:42.658 23:10:17 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:42.658 23:10:17 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87369 00:28:42.658 killing process with pid 87369 00:28:42.658 23:10:17 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:42.658 23:10:17 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:42.658 23:10:17 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87369' 00:28:42.658 23:10:17 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 87369 00:28:42.658 23:10:17 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 87369 00:28:44.639 23:10:19 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:44.639 23:10:19 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:28:44.639 23:10:19 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:28:44.639 23:10:19 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:44.639 23:10:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:44.639 ************************************ 00:28:44.639 START TEST bdev_hello_world 00:28:44.639 ************************************ 00:28:44.639 23:10:19 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:28:44.639 [2024-12-09 23:10:19.579696] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:28:44.639 [2024-12-09 23:10:19.579953] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87425 ] 00:28:44.639 [2024-12-09 23:10:19.737885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.639 [2024-12-09 23:10:19.839199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.898 [2024-12-09 23:10:20.227840] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:28:44.898 [2024-12-09 23:10:20.228018] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:28:44.898 [2024-12-09 23:10:20.228059] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:28:44.898 [2024-12-09 23:10:20.228602] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:28:44.898 [2024-12-09 23:10:20.228728] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:28:44.898 [2024-12-09 23:10:20.228747] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:28:44.898 [2024-12-09 23:10:20.228801] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:28:44.898 00:28:44.898 [2024-12-09 23:10:20.228818] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:28:45.868 00:28:45.868 real 0m1.592s 00:28:45.868 user 0m1.297s 00:28:45.868 sys 0m0.176s 00:28:45.868 ************************************ 00:28:45.868 END TEST bdev_hello_world 00:28:45.868 ************************************ 00:28:45.868 23:10:21 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:45.868 23:10:21 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:28:45.868 23:10:21 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:28:45.868 23:10:21 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:45.868 23:10:21 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:45.868 23:10:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:45.868 ************************************ 00:28:45.868 START TEST bdev_bounds 00:28:45.868 ************************************ 00:28:45.868 23:10:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:28:45.868 Process bdevio pid: 87462 00:28:45.868 23:10:21 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=87462 00:28:45.868 23:10:21 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:28:45.868 23:10:21 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 87462' 00:28:45.868 23:10:21 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 87462 00:28:45.868 23:10:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 87462 ']' 00:28:45.868 23:10:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:45.868 23:10:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:28:45.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:45.868 23:10:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:45.868 23:10:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:45.868 23:10:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:28:45.868 23:10:21 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:28:46.161 [2024-12-09 23:10:21.214926] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:28:46.161 [2024-12-09 23:10:21.215055] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87462 ] 00:28:46.161 [2024-12-09 23:10:21.378181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:46.161 [2024-12-09 23:10:21.502211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:46.161 [2024-12-09 23:10:21.502321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:46.161 [2024-12-09 23:10:21.502479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.733 23:10:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:46.734 23:10:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:28:46.734 23:10:22 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:28:46.994 I/O targets: 00:28:46.994 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:28:46.994 
00:28:46.994 00:28:46.994 CUnit - A unit testing framework for C - Version 2.1-3 00:28:46.994 http://cunit.sourceforge.net/ 00:28:46.994 00:28:46.994 00:28:46.994 Suite: bdevio tests on: raid5f 00:28:46.994 Test: blockdev write read block ...passed 00:28:46.994 Test: blockdev write zeroes read block ...passed 00:28:46.994 Test: blockdev write zeroes read no split ...passed 00:28:46.994 Test: blockdev write zeroes read split ...passed 00:28:46.994 Test: blockdev write zeroes read split partial ...passed 00:28:46.994 Test: blockdev reset ...passed 00:28:46.994 Test: blockdev write read 8 blocks ...passed 00:28:47.255 Test: blockdev write read size > 128k ...passed 00:28:47.255 Test: blockdev write read invalid size ...passed 00:28:47.255 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:47.255 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:47.255 Test: blockdev write read max offset ...passed 00:28:47.255 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:47.255 Test: blockdev writev readv 8 blocks ...passed 00:28:47.255 Test: blockdev writev readv 30 x 1block ...passed 00:28:47.255 Test: blockdev writev readv block ...passed 00:28:47.255 Test: blockdev writev readv size > 128k ...passed 00:28:47.255 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:47.255 Test: blockdev comparev and writev ...passed 00:28:47.255 Test: blockdev nvme passthru rw ...passed 00:28:47.255 Test: blockdev nvme passthru vendor specific ...passed 00:28:47.255 Test: blockdev nvme admin passthru ...passed 00:28:47.255 Test: blockdev copy ...passed 00:28:47.255 00:28:47.255 Run Summary: Type Total Ran Passed Failed Inactive 00:28:47.255 suites 1 1 n/a 0 0 00:28:47.255 tests 23 23 23 0 0 00:28:47.255 asserts 130 130 130 0 n/a 00:28:47.255 00:28:47.255 Elapsed time = 0.491 seconds 00:28:47.255 0 00:28:47.255 23:10:22 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 87462 
00:28:47.255 23:10:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 87462 ']' 00:28:47.255 23:10:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 87462 00:28:47.255 23:10:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:28:47.255 23:10:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:47.255 23:10:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87462 00:28:47.255 killing process with pid 87462 00:28:47.255 23:10:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:47.255 23:10:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:47.255 23:10:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87462' 00:28:47.255 23:10:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 87462 00:28:47.255 23:10:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 87462 00:28:48.198 ************************************ 00:28:48.198 END TEST bdev_bounds 00:28:48.198 ************************************ 00:28:48.198 23:10:23 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:28:48.198 00:28:48.198 real 0m2.143s 00:28:48.198 user 0m5.296s 00:28:48.198 sys 0m0.271s 00:28:48.198 23:10:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:48.198 23:10:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:28:48.198 23:10:23 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:28:48.198 23:10:23 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:28:48.198 23:10:23 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:28:48.198 23:10:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:48.198 ************************************ 00:28:48.198 START TEST bdev_nbd 00:28:48.198 ************************************ 00:28:48.198 23:10:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:28:48.198 23:10:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:28:48.198 23:10:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:28:48.198 23:10:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:48.198 23:10:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:48.198 23:10:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:28:48.198 23:10:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:28:48.198 23:10:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:28:48.198 23:10:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:28:48.198 23:10:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:28:48.198 23:10:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:28:48.198 23:10:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:28:48.198 23:10:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:28:48.198 23:10:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:28:48.198 23:10:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:28:48.198 23:10:23 blockdev_raid5f.bdev_nbd -- 
bdev/blockdev.sh@314 -- # local bdev_list 00:28:48.198 23:10:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=87516 00:28:48.198 23:10:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:28:48.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:28:48.198 23:10:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 87516 /var/tmp/spdk-nbd.sock 00:28:48.198 23:10:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 87516 ']' 00:28:48.198 23:10:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:28:48.198 23:10:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:48.198 23:10:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:28:48.198 23:10:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:28:48.198 23:10:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:48.198 23:10:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:28:48.198 [2024-12-09 23:10:23.403368] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:28:48.198 [2024-12-09 23:10:23.403643] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:48.459 [2024-12-09 23:10:23.562376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.459 [2024-12-09 23:10:23.664241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:49.028 23:10:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:49.028 23:10:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:28:49.028 23:10:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:28:49.028 23:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:49.028 23:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:28:49.028 23:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:28:49.028 23:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:28:49.028 23:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:49.028 23:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:28:49.028 23:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:28:49.028 23:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:28:49.028 23:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:28:49.028 23:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:28:49.028 23:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:28:49.028 23:10:24 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:28:49.291 23:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:28:49.291 23:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:28:49.291 23:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:28:49.291 23:10:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:28:49.291 23:10:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:28:49.291 23:10:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:49.291 23:10:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:49.291 23:10:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:28:49.291 23:10:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:28:49.291 23:10:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:49.291 23:10:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:49.291 23:10:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:49.291 1+0 records in 00:28:49.291 1+0 records out 00:28:49.291 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254671 s, 16.1 MB/s 00:28:49.291 23:10:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:49.292 23:10:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:28:49.292 23:10:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:49.292 23:10:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:28:49.292 23:10:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:28:49.292 23:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:49.292 23:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:28:49.292 23:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:49.559 23:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:28:49.559 { 00:28:49.559 "nbd_device": "/dev/nbd0", 00:28:49.559 "bdev_name": "raid5f" 00:28:49.559 } 00:28:49.559 ]' 00:28:49.559 23:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:28:49.559 23:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:28:49.559 { 00:28:49.559 "nbd_device": "/dev/nbd0", 00:28:49.559 "bdev_name": "raid5f" 00:28:49.559 } 00:28:49.559 ]' 00:28:49.559 23:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:28:49.559 23:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:49.559 23:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:49.559 23:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:49.559 23:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:49.559 23:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:28:49.559 23:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:49.559 23:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:49.820 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:28:49.820 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:49.820 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:49.820 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:49.820 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:49.820 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:49.820 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:49.820 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:49.820 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:49.820 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:49.820 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:50.082 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:50.082 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:50.082 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:50.082 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:50.082 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:28:50.082 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:50.082 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:28:50.082 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:28:50.082 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:28:50.082 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:28:50.082 23:10:25 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:28:50.082 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:28:50.082 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:28:50.082 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:50.082 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:28:50.082 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:28:50.082 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:28:50.082 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:28:50.082 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:28:50.082 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:50.082 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:28:50.082 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:50.082 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:50.082 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:50.082 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:28:50.082 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:50.082 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:50.082 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:28:50.343 /dev/nbd0 00:28:50.343 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:50.343 23:10:25 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:50.343 23:10:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:28:50.343 23:10:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:28:50.343 23:10:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:50.343 23:10:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:50.343 23:10:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:28:50.343 23:10:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:28:50.343 23:10:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:50.343 23:10:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:50.343 23:10:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:50.343 1+0 records in 00:28:50.343 1+0 records out 00:28:50.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254216 s, 16.1 MB/s 00:28:50.343 23:10:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:50.343 23:10:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:28:50.343 23:10:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:50.343 23:10:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:50.343 23:10:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:28:50.343 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:50.343 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:50.343 23:10:25 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:50.343 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:50.343 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:50.603 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:28:50.603 { 00:28:50.603 "nbd_device": "/dev/nbd0", 00:28:50.603 "bdev_name": "raid5f" 00:28:50.603 } 00:28:50.603 ]' 00:28:50.603 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:50.603 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:28:50.603 { 00:28:50.603 "nbd_device": "/dev/nbd0", 00:28:50.603 "bdev_name": "raid5f" 00:28:50.603 } 00:28:50.603 ]' 00:28:50.603 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:28:50.603 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:28:50.603 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:50.603 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:28:50.603 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:28:50.603 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:28:50.603 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:28:50.603 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:28:50.603 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:28:50.603 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:50.603 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:28:50.604 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:50.604 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:28:50.604 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:28:50.604 256+0 records in 00:28:50.604 256+0 records out 00:28:50.604 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00710867 s, 148 MB/s 00:28:50.604 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:50.604 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:28:50.604 256+0 records in 00:28:50.604 256+0 records out 00:28:50.604 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0282678 s, 37.1 MB/s 00:28:50.604 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:28:50.604 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:28:50.604 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:50.604 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:28:50.604 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:50.604 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:28:50.604 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:28:50.604 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:50.604 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:28:50.604 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:50.604 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:50.604 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:50.604 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:50.604 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:50.604 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:28:50.604 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:50.604 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:50.864 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:50.864 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:50.864 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:50.864 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:50.864 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:50.864 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:50.864 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:50.864 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:50.864 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:50.864 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:50.864 23:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:28:50.864 23:10:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:50.864 23:10:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:50.865 23:10:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:51.124 23:10:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:51.124 23:10:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:51.124 23:10:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:28:51.124 23:10:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:28:51.124 23:10:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:28:51.124 23:10:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:28:51.124 23:10:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:28:51.124 23:10:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:28:51.124 23:10:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:28:51.124 23:10:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:51.124 23:10:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:51.124 23:10:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:28:51.124 23:10:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:28:51.124 malloc_lvol_verify 00:28:51.124 23:10:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:28:51.384 639cbd42-4356-4713-9063-b6c43c390903 00:28:51.384 23:10:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:28:51.643 5c42a4fe-983f-476b-af52-3e08927f6c47 00:28:51.643 23:10:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:28:51.902 /dev/nbd0 00:28:51.902 23:10:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:28:51.902 23:10:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:28:51.902 23:10:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:28:51.902 23:10:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:28:51.902 23:10:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:28:51.902 mke2fs 1.47.0 (5-Feb-2023) 00:28:51.902 Discarding device blocks: 0/4096 done 00:28:51.902 Creating filesystem with 4096 1k blocks and 1024 inodes 00:28:51.902 00:28:51.902 Allocating group tables: 0/1 done 00:28:51.902 Writing inode tables: 0/1 done 00:28:51.902 Creating journal (1024 blocks): done 00:28:51.902 Writing superblocks and filesystem accounting information: 0/1 done 00:28:51.902 00:28:51.902 23:10:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:51.902 23:10:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:51.902 23:10:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:51.902 23:10:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:51.902 23:10:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:28:51.902 23:10:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:51.902 23:10:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:52.164 23:10:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:52.164 23:10:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:52.164 23:10:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:52.164 23:10:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:52.164 23:10:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:52.164 23:10:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:52.164 23:10:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:52.164 23:10:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:52.164 23:10:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 87516 00:28:52.164 23:10:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 87516 ']' 00:28:52.164 23:10:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 87516 00:28:52.164 23:10:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:28:52.164 23:10:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:52.164 23:10:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87516 00:28:52.164 23:10:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:52.164 23:10:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:52.164 killing process with pid 87516 00:28:52.164 23:10:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87516' 00:28:52.164 23:10:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 87516 00:28:52.164 23:10:27 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 87516 00:28:53.104 23:10:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:28:53.104 00:28:53.104 real 0m4.801s 00:28:53.104 user 0m6.993s 00:28:53.104 sys 0m0.997s 00:28:53.104 23:10:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:53.104 23:10:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:28:53.104 ************************************ 00:28:53.104 END TEST bdev_nbd 00:28:53.104 ************************************ 00:28:53.104 23:10:28 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:28:53.105 23:10:28 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:28:53.105 23:10:28 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:28:53.105 23:10:28 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:28:53.105 23:10:28 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:53.105 23:10:28 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:53.105 23:10:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:53.105 ************************************ 00:28:53.105 START TEST bdev_fio 00:28:53.105 ************************************ 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:28:53.105 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:28:53.105 ************************************ 00:28:53.105 START TEST bdev_fio_rw_verify 00:28:53.105 ************************************ 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:53.105 23:10:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:28:53.105 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:53.105 fio-3.35 00:28:53.105 Starting 1 thread 00:29:05.351 00:29:05.351 job_raid5f: (groupid=0, jobs=1): err= 0: pid=87713: Mon Dec 9 23:10:39 2024 00:29:05.351 read: IOPS=11.5k, BW=44.8MiB/s (47.0MB/s)(448MiB/10001msec) 00:29:05.351 slat (nsec): min=18297, max=80548, avg=21625.47, stdev=2844.54 00:29:05.351 clat (usec): min=9, max=529, avg=142.90, stdev=53.64 00:29:05.351 lat (usec): min=28, max=560, avg=164.53, stdev=54.48 00:29:05.351 clat percentiles (usec): 00:29:05.351 | 50.000th=[ 143], 99.000th=[ 258], 99.900th=[ 273], 99.990th=[ 388], 00:29:05.351 | 99.999th=[ 494] 00:29:05.351 write: IOPS=12.0k, BW=46.8MiB/s (49.1MB/s)(462MiB/9866msec); 0 zone resets 00:29:05.351 slat (usec): min=7, max=812, avg=17.59, stdev= 3.80 00:29:05.351 clat (usec): min=54, max=1156, avg=317.16, stdev=51.77 00:29:05.351 lat (usec): min=70, max=1172, avg=334.74, stdev=53.40 00:29:05.351 clat percentiles (usec): 00:29:05.351 | 50.000th=[ 314], 99.000th=[ 424], 99.900th=[ 519], 99.990th=[ 619], 00:29:05.351 | 99.999th=[ 1139] 00:29:05.351 bw ( KiB/s): min=37832, max=54864, per=98.30%, avg=47155.37, stdev=5349.03, samples=19 00:29:05.351 iops : min= 9458, max=13716, avg=11788.84, stdev=1337.26, samples=19 00:29:05.351 lat (usec) : 10=0.01%, 20=0.01%, 
50=0.01%, 100=13.61%, 250=39.33% 00:29:05.351 lat (usec) : 500=46.99%, 750=0.07% 00:29:05.351 lat (msec) : 2=0.01% 00:29:05.351 cpu : usr=99.22%, sys=0.21%, ctx=17, majf=0, minf=9462 00:29:05.351 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:05.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.351 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.351 issued rwts: total=114741,118319,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.352 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:05.352 00:29:05.352 Run status group 0 (all jobs): 00:29:05.352 READ: bw=44.8MiB/s (47.0MB/s), 44.8MiB/s-44.8MiB/s (47.0MB/s-47.0MB/s), io=448MiB (470MB), run=10001-10001msec 00:29:05.352 WRITE: bw=46.8MiB/s (49.1MB/s), 46.8MiB/s-46.8MiB/s (49.1MB/s-49.1MB/s), io=462MiB (485MB), run=9866-9866msec 00:29:05.352 ----------------------------------------------------- 00:29:05.352 Suppressions used: 00:29:05.352 count bytes template 00:29:05.352 1 7 /usr/src/fio/parse.c 00:29:05.352 24 2304 /usr/src/fio/iolog.c 00:29:05.352 1 8 libtcmalloc_minimal.so 00:29:05.352 1 904 libcrypto.so 00:29:05.352 ----------------------------------------------------- 00:29:05.352 00:29:05.352 00:29:05.352 real 0m12.101s 00:29:05.352 user 0m12.838s 00:29:05.352 sys 0m0.520s 00:29:05.352 23:10:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:05.352 23:10:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:29:05.352 ************************************ 00:29:05.352 END TEST bdev_fio_rw_verify 00:29:05.352 ************************************ 00:29:05.352 23:10:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:29:05.352 23:10:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:29:05.352 23:10:40 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:29:05.352 23:10:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:29:05.352 23:10:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:29:05.352 23:10:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:29:05.352 23:10:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:29:05.352 23:10:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:29:05.352 23:10:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:29:05.352 23:10:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:29:05.352 23:10:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:29:05.352 23:10:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:29:05.352 23:10:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:29:05.352 23:10:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:29:05.352 23:10:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:29:05.352 23:10:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:29:05.352 23:10:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:29:05.352 23:10:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "7afb72ef-5707-4f31-b484-a3d33190b3a3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": 
"7afb72ef-5707-4f31-b484-a3d33190b3a3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "7afb72ef-5707-4f31-b484-a3d33190b3a3",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "0bf02c08-8bfb-4b45-bc2e-abc834f04c00",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "4150c014-26c8-467d-907f-fc221c7ebce7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "0a34f968-ff0c-402f-8c52-e040799461e0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:29:05.352 23:10:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:29:05.352 23:10:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:29:05.352 23:10:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:29:05.352 /home/vagrant/spdk_repo/spdk 00:29:05.352 23:10:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:29:05.352 23:10:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # 
return 0 00:29:05.352 00:29:05.352 real 0m12.269s 00:29:05.352 user 0m12.920s 00:29:05.352 sys 0m0.592s 00:29:05.352 23:10:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:05.352 ************************************ 00:29:05.352 23:10:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:29:05.352 END TEST bdev_fio 00:29:05.352 ************************************ 00:29:05.352 23:10:40 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:05.352 23:10:40 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:05.352 23:10:40 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:29:05.352 23:10:40 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:05.352 23:10:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:29:05.352 ************************************ 00:29:05.352 START TEST bdev_verify 00:29:05.352 ************************************ 00:29:05.352 23:10:40 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:05.352 [2024-12-09 23:10:40.545620] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 
00:29:05.352 [2024-12-09 23:10:40.545736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87871 ] 00:29:05.352 [2024-12-09 23:10:40.704330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:05.613 [2024-12-09 23:10:40.808708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:05.613 [2024-12-09 23:10:40.808917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:05.873 Running I/O for 5 seconds... 00:29:08.202 16833.00 IOPS, 65.75 MiB/s [2024-12-09T23:10:44.233Z] 17509.00 IOPS, 68.39 MiB/s [2024-12-09T23:10:45.625Z] 17250.00 IOPS, 67.38 MiB/s [2024-12-09T23:10:46.568Z] 17427.25 IOPS, 68.08 MiB/s [2024-12-09T23:10:46.568Z] 18350.40 IOPS, 71.68 MiB/s 00:29:11.205 Latency(us) 00:29:11.205 [2024-12-09T23:10:46.568Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.205 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:11.205 Verification LBA range: start 0x0 length 0x2000 00:29:11.205 raid5f : 5.01 9197.91 35.93 0.00 0.00 20923.14 186.68 18854.20 00:29:11.205 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:29:11.205 Verification LBA range: start 0x2000 length 0x2000 00:29:11.205 raid5f : 5.01 9157.94 35.77 0.00 0.00 20879.67 182.74 23290.49 00:29:11.205 [2024-12-09T23:10:46.568Z] =================================================================================================================== 00:29:11.205 [2024-12-09T23:10:46.568Z] Total : 18355.84 71.70 0.00 0.00 20901.46 182.74 23290.49 00:29:11.782 00:29:11.782 real 0m6.469s 00:29:11.782 user 0m12.084s 00:29:11.782 sys 0m0.195s 00:29:11.782 23:10:46 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:11.782 23:10:46 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:29:11.782 ************************************ 00:29:11.782 END TEST bdev_verify 00:29:11.782 ************************************ 00:29:11.782 23:10:46 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:29:11.782 23:10:46 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:29:11.782 23:10:46 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:11.782 23:10:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:29:11.782 ************************************ 00:29:11.782 START TEST bdev_verify_big_io 00:29:11.782 ************************************ 00:29:11.782 23:10:46 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:29:11.782 [2024-12-09 23:10:47.061888] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:29:11.782 [2024-12-09 23:10:47.062017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87958 ] 00:29:12.044 [2024-12-09 23:10:47.220183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:12.044 [2024-12-09 23:10:47.323784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:12.044 [2024-12-09 23:10:47.324142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:12.617 Running I/O for 5 seconds... 
00:29:14.582 1011.00 IOPS, 63.19 MiB/s [2024-12-09T23:10:50.887Z] 1046.00 IOPS, 65.38 MiB/s [2024-12-09T23:10:52.271Z] 1078.33 IOPS, 67.40 MiB/s [2024-12-09T23:10:52.842Z] 1173.75 IOPS, 73.36 MiB/s [2024-12-09T23:10:53.103Z] 1218.80 IOPS, 76.17 MiB/s 00:29:17.740 Latency(us) 00:29:17.740 [2024-12-09T23:10:53.103Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.740 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:29:17.740 Verification LBA range: start 0x0 length 0x200 00:29:17.740 raid5f : 5.20 585.67 36.60 0.00 0.00 5372361.48 138.63 290374.89 00:29:17.740 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:29:17.740 Verification LBA range: start 0x200 length 0x200 00:29:17.740 raid5f : 5.12 644.73 40.30 0.00 0.00 4881935.10 165.42 254884.63 00:29:17.740 [2024-12-09T23:10:53.103Z] =================================================================================================================== 00:29:17.740 [2024-12-09T23:10:53.103Z] Total : 1230.40 76.90 0.00 0.00 5117259.41 138.63 290374.89 00:29:18.683 00:29:18.683 real 0m6.683s 00:29:18.683 user 0m12.518s 00:29:18.683 sys 0m0.192s 00:29:18.683 23:10:53 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:18.683 23:10:53 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:29:18.683 ************************************ 00:29:18.683 END TEST bdev_verify_big_io 00:29:18.683 ************************************ 00:29:18.683 23:10:53 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:18.683 23:10:53 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:29:18.683 23:10:53 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:18.683 23:10:53 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:29:18.683 ************************************ 00:29:18.683 START TEST bdev_write_zeroes 00:29:18.683 ************************************ 00:29:18.684 23:10:53 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:18.684 [2024-12-09 23:10:53.786080] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:29:18.684 [2024-12-09 23:10:53.786214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88051 ] 00:29:18.684 [2024-12-09 23:10:53.941291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.684 [2024-12-09 23:10:54.028625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.260 Running I/O for 1 seconds... 
00:29:20.209 27759.00 IOPS, 108.43 MiB/s 00:29:20.209 Latency(us) 00:29:20.209 [2024-12-09T23:10:55.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.209 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:20.209 raid5f : 1.01 27744.42 108.38 0.00 0.00 4599.86 1291.82 6251.13 00:29:20.209 [2024-12-09T23:10:55.572Z] =================================================================================================================== 00:29:20.209 [2024-12-09T23:10:55.572Z] Total : 27744.42 108.38 0.00 0.00 4599.86 1291.82 6251.13 00:29:20.787 00:29:20.787 real 0m2.394s 00:29:20.787 user 0m2.086s 00:29:20.787 sys 0m0.186s 00:29:20.787 23:10:56 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:20.787 23:10:56 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:29:20.787 ************************************ 00:29:20.787 END TEST bdev_write_zeroes 00:29:20.787 ************************************ 00:29:21.048 23:10:56 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:21.048 23:10:56 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:29:21.048 23:10:56 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:21.048 23:10:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:29:21.048 ************************************ 00:29:21.048 START TEST bdev_json_nonenclosed 00:29:21.048 ************************************ 00:29:21.048 23:10:56 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:21.048 [2024-12-09 
23:10:56.213901] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:29:21.048 [2024-12-09 23:10:56.214001] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88093 ] 00:29:21.048 [2024-12-09 23:10:56.366060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:21.310 [2024-12-09 23:10:56.450608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:21.310 [2024-12-09 23:10:56.450688] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:29:21.310 [2024-12-09 23:10:56.450707] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:29:21.310 [2024-12-09 23:10:56.450715] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:21.310 00:29:21.310 real 0m0.442s 00:29:21.310 user 0m0.244s 00:29:21.310 sys 0m0.094s 00:29:21.310 ************************************ 00:29:21.310 END TEST bdev_json_nonenclosed 00:29:21.310 ************************************ 00:29:21.310 23:10:56 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:21.310 23:10:56 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:29:21.310 23:10:56 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:21.310 23:10:56 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:29:21.310 23:10:56 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:21.310 23:10:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:29:21.310 
************************************ 00:29:21.310 START TEST bdev_json_nonarray 00:29:21.310 ************************************ 00:29:21.310 23:10:56 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:21.570 [2024-12-09 23:10:56.704885] Starting SPDK v25.01-pre git sha1 43c35d804 / DPDK 24.03.0 initialization... 00:29:21.570 [2024-12-09 23:10:56.705010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88119 ] 00:29:21.570 [2024-12-09 23:10:56.860776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:21.831 [2024-12-09 23:10:56.961355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:21.831 [2024-12-09 23:10:56.961443] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:29:21.831 [2024-12-09 23:10:56.961460] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:29:21.831 [2024-12-09 23:10:56.961474] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:21.831 00:29:21.831 real 0m0.500s 00:29:21.831 user 0m0.301s 00:29:21.831 sys 0m0.095s 00:29:21.831 ************************************ 00:29:21.831 END TEST bdev_json_nonarray 00:29:21.831 ************************************ 00:29:21.831 23:10:57 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:21.831 23:10:57 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:29:21.831 23:10:57 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:29:21.831 23:10:57 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:29:21.831 23:10:57 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:29:21.831 23:10:57 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:29:21.831 23:10:57 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:29:21.831 23:10:57 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:29:21.831 23:10:57 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:21.831 23:10:57 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:29:21.831 23:10:57 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:29:21.831 23:10:57 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:29:21.831 23:10:57 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:29:21.831 ************************************ 00:29:21.831 END TEST blockdev_raid5f 00:29:21.831 ************************************ 00:29:21.831 00:29:21.831 real 0m40.715s 00:29:21.831 user 0m56.869s 00:29:21.831 sys 0m3.503s 00:29:21.831 23:10:57 blockdev_raid5f -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:29:21.831 23:10:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:29:22.092 23:10:57 -- spdk/autotest.sh@194 -- # uname -s 00:29:22.092 23:10:57 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:29:22.092 23:10:57 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:29:22.092 23:10:57 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:29:22.092 23:10:57 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:29:22.092 23:10:57 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:29:22.092 23:10:57 -- spdk/autotest.sh@260 -- # timing_exit lib 00:29:22.092 23:10:57 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:22.092 23:10:57 -- common/autotest_common.sh@10 -- # set +x 00:29:22.092 23:10:57 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:29:22.092 23:10:57 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:29:22.092 23:10:57 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:29:22.092 23:10:57 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:29:22.092 23:10:57 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:29:22.092 23:10:57 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:29:22.092 23:10:57 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:29:22.092 23:10:57 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:22.092 23:10:57 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:29:22.092 23:10:57 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:29:22.092 23:10:57 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:22.092 23:10:57 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:29:22.092 23:10:57 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:29:22.092 23:10:57 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:29:22.092 23:10:57 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:29:22.092 23:10:57 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:29:22.092 23:10:57 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:29:22.092 23:10:57 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:29:22.092 23:10:57 -- spdk/autotest.sh@385 -- # trap - 
SIGINT SIGTERM EXIT 00:29:22.092 23:10:57 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:29:22.092 23:10:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:22.092 23:10:57 -- common/autotest_common.sh@10 -- # set +x 00:29:22.092 23:10:57 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:29:22.092 23:10:57 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:29:22.092 23:10:57 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:29:22.092 23:10:57 -- common/autotest_common.sh@10 -- # set +x 00:29:23.096 INFO: APP EXITING 00:29:23.096 INFO: killing all VMs 00:29:23.096 INFO: killing vhost app 00:29:23.096 INFO: EXIT DONE 00:29:23.355 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:23.355 Waiting for block devices as requested 00:29:23.355 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:23.615 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:24.189 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:24.189 Cleaning 00:29:24.189 Removing: /var/run/dpdk/spdk0/config 00:29:24.189 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:24.189 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:24.189 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:24.189 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:24.189 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:24.189 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:24.189 Removing: /dev/shm/spdk_tgt_trace.pid56078 00:29:24.189 Removing: /var/run/dpdk/spdk0 00:29:24.189 Removing: /var/run/dpdk/spdk_pid55887 00:29:24.189 Removing: /var/run/dpdk/spdk_pid56078 00:29:24.189 Removing: /var/run/dpdk/spdk_pid56285 00:29:24.189 Removing: /var/run/dpdk/spdk_pid56378 00:29:24.189 Removing: /var/run/dpdk/spdk_pid56418 00:29:24.189 Removing: /var/run/dpdk/spdk_pid56540 00:29:24.189 Removing: 
/var/run/dpdk/spdk_pid56553 00:29:24.189 Removing: /var/run/dpdk/spdk_pid56746 00:29:24.189 Removing: /var/run/dpdk/spdk_pid56839 00:29:24.189 Removing: /var/run/dpdk/spdk_pid56935 00:29:24.189 Removing: /var/run/dpdk/spdk_pid57041 00:29:24.189 Removing: /var/run/dpdk/spdk_pid57132 00:29:24.189 Removing: /var/run/dpdk/spdk_pid57172 00:29:24.189 Removing: /var/run/dpdk/spdk_pid57203 00:29:24.189 Removing: /var/run/dpdk/spdk_pid57279 00:29:24.189 Removing: /var/run/dpdk/spdk_pid57379 00:29:24.189 Removing: /var/run/dpdk/spdk_pid57810 00:29:24.189 Removing: /var/run/dpdk/spdk_pid57874 00:29:24.189 Removing: /var/run/dpdk/spdk_pid57926 00:29:24.189 Removing: /var/run/dpdk/spdk_pid57942 00:29:24.189 Removing: /var/run/dpdk/spdk_pid58044 00:29:24.189 Removing: /var/run/dpdk/spdk_pid58049 00:29:24.189 Removing: /var/run/dpdk/spdk_pid58151 00:29:24.189 Removing: /var/run/dpdk/spdk_pid58166 00:29:24.189 Removing: /var/run/dpdk/spdk_pid58220 00:29:24.189 Removing: /var/run/dpdk/spdk_pid58238 00:29:24.189 Removing: /var/run/dpdk/spdk_pid58291 00:29:24.189 Removing: /var/run/dpdk/spdk_pid58308 00:29:24.189 Removing: /var/run/dpdk/spdk_pid58464 00:29:24.189 Removing: /var/run/dpdk/spdk_pid58500 00:29:24.189 Removing: /var/run/dpdk/spdk_pid58584 00:29:24.189 Removing: /var/run/dpdk/spdk_pid59830 00:29:24.189 Removing: /var/run/dpdk/spdk_pid60026 00:29:24.189 Removing: /var/run/dpdk/spdk_pid60161 00:29:24.189 Removing: /var/run/dpdk/spdk_pid60780 00:29:24.189 Removing: /var/run/dpdk/spdk_pid60981 00:29:24.189 Removing: /var/run/dpdk/spdk_pid61121 00:29:24.189 Removing: /var/run/dpdk/spdk_pid61737 00:29:24.189 Removing: /var/run/dpdk/spdk_pid62053 00:29:24.189 Removing: /var/run/dpdk/spdk_pid62193 00:29:24.189 Removing: /var/run/dpdk/spdk_pid63523 00:29:24.189 Removing: /var/run/dpdk/spdk_pid63765 00:29:24.189 Removing: /var/run/dpdk/spdk_pid63905 00:29:24.189 Removing: /var/run/dpdk/spdk_pid65235 00:29:24.189 Removing: /var/run/dpdk/spdk_pid65478 00:29:24.189 Removing: 
/var/run/dpdk/spdk_pid65618 00:29:24.189 Removing: /var/run/dpdk/spdk_pid66959 00:29:24.189 Removing: /var/run/dpdk/spdk_pid67387 00:29:24.189 Removing: /var/run/dpdk/spdk_pid67517 00:29:24.189 Removing: /var/run/dpdk/spdk_pid68954 00:29:24.189 Removing: /var/run/dpdk/spdk_pid69213 00:29:24.189 Removing: /var/run/dpdk/spdk_pid69348 00:29:24.189 Removing: /var/run/dpdk/spdk_pid70778 00:29:24.189 Removing: /var/run/dpdk/spdk_pid71026 00:29:24.189 Removing: /var/run/dpdk/spdk_pid71166 00:29:24.189 Removing: /var/run/dpdk/spdk_pid72596 00:29:24.189 Removing: /var/run/dpdk/spdk_pid73061 00:29:24.189 Removing: /var/run/dpdk/spdk_pid73196 00:29:24.189 Removing: /var/run/dpdk/spdk_pid73329 00:29:24.189 Removing: /var/run/dpdk/spdk_pid73736 00:29:24.189 Removing: /var/run/dpdk/spdk_pid74437 00:29:24.189 Removing: /var/run/dpdk/spdk_pid74798 00:29:24.189 Removing: /var/run/dpdk/spdk_pid75459 00:29:24.189 Removing: /var/run/dpdk/spdk_pid75883 00:29:24.189 Removing: /var/run/dpdk/spdk_pid76618 00:29:24.189 Removing: /var/run/dpdk/spdk_pid77027 00:29:24.189 Removing: /var/run/dpdk/spdk_pid78903 00:29:24.189 Removing: /var/run/dpdk/spdk_pid79325 00:29:24.189 Removing: /var/run/dpdk/spdk_pid79747 00:29:24.189 Removing: /var/run/dpdk/spdk_pid81737 00:29:24.189 Removing: /var/run/dpdk/spdk_pid82201 00:29:24.189 Removing: /var/run/dpdk/spdk_pid82695 00:29:24.189 Removing: /var/run/dpdk/spdk_pid83731 00:29:24.189 Removing: /var/run/dpdk/spdk_pid84043 00:29:24.189 Removing: /var/run/dpdk/spdk_pid84941 00:29:24.189 Removing: /var/run/dpdk/spdk_pid85247 00:29:24.189 Removing: /var/run/dpdk/spdk_pid86151 00:29:24.189 Removing: /var/run/dpdk/spdk_pid86457 00:29:24.189 Removing: /var/run/dpdk/spdk_pid87111 00:29:24.189 Removing: /var/run/dpdk/spdk_pid87369 00:29:24.189 Removing: /var/run/dpdk/spdk_pid87425 00:29:24.189 Removing: /var/run/dpdk/spdk_pid87462 00:29:24.189 Removing: /var/run/dpdk/spdk_pid87698 00:29:24.189 Removing: /var/run/dpdk/spdk_pid87871 00:29:24.189 Removing: 
/var/run/dpdk/spdk_pid87958 00:29:24.189 Removing: /var/run/dpdk/spdk_pid88051 00:29:24.189 Removing: /var/run/dpdk/spdk_pid88093 00:29:24.189 Removing: /var/run/dpdk/spdk_pid88119 00:29:24.189 Clean 00:29:24.449 23:10:59 -- common/autotest_common.sh@1453 -- # return 0 00:29:24.449 23:10:59 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:29:24.449 23:10:59 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:24.449 23:10:59 -- common/autotest_common.sh@10 -- # set +x 00:29:24.449 23:10:59 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:29:24.449 23:10:59 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:24.449 23:10:59 -- common/autotest_common.sh@10 -- # set +x 00:29:24.449 23:10:59 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:24.449 23:10:59 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:29:24.449 23:10:59 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:29:24.449 23:10:59 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:29:24.449 23:10:59 -- spdk/autotest.sh@398 -- # hostname 00:29:24.449 23:10:59 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:29:24.449 geninfo: WARNING: invalid characters removed from testname! 
00:29:51.070 23:11:22 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:51.070 23:11:24 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:51.640 23:11:26 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:54.181 23:11:29 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:56.732 23:11:31 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:59.284 23:11:34 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:01.201 23:11:36 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:01.201 23:11:36 -- spdk/autorun.sh@1 -- $ timing_finish 00:30:01.201 23:11:36 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:30:01.201 23:11:36 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:01.201 23:11:36 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:30:01.201 23:11:36 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:01.201 + [[ -n 4988 ]] 00:30:01.201 + sudo kill 4988 00:30:01.212 [Pipeline] } 00:30:01.227 [Pipeline] // timeout 00:30:01.232 [Pipeline] } 00:30:01.246 [Pipeline] // stage 00:30:01.251 [Pipeline] } 00:30:01.266 [Pipeline] // catchError 00:30:01.275 [Pipeline] stage 00:30:01.277 [Pipeline] { (Stop VM) 00:30:01.290 [Pipeline] sh 00:30:01.576 + vagrant halt 00:30:04.126 ==> default: Halting domain... 00:30:07.451 [Pipeline] sh 00:30:07.739 + vagrant destroy -f 00:30:10.282 ==> default: Removing domain... 
00:30:10.292 [Pipeline] sh 00:30:10.569 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:30:10.580 [Pipeline] } 00:30:10.596 [Pipeline] // stage 00:30:10.601 [Pipeline] } 00:30:10.615 [Pipeline] // dir 00:30:10.620 [Pipeline] } 00:30:10.633 [Pipeline] // wrap 00:30:10.639 [Pipeline] } 00:30:10.652 [Pipeline] // catchError 00:30:10.661 [Pipeline] stage 00:30:10.663 [Pipeline] { (Epilogue) 00:30:10.675 [Pipeline] sh 00:30:10.966 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:16.279 [Pipeline] catchError 00:30:16.281 [Pipeline] { 00:30:16.294 [Pipeline] sh 00:30:16.580 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:16.580 Artifacts sizes are good 00:30:16.590 [Pipeline] } 00:30:16.601 [Pipeline] // catchError 00:30:16.610 [Pipeline] archiveArtifacts 00:30:16.618 Archiving artifacts 00:30:16.728 [Pipeline] cleanWs 00:30:16.742 [WS-CLEANUP] Deleting project workspace... 00:30:16.742 [WS-CLEANUP] Deferred wipeout is used... 00:30:16.750 [WS-CLEANUP] done 00:30:16.752 [Pipeline] } 00:30:16.772 [Pipeline] // stage 00:30:16.778 [Pipeline] } 00:30:16.791 [Pipeline] // node 00:30:16.797 [Pipeline] End of Pipeline 00:30:16.843 Finished: SUCCESS